The Sun is the source of energy for most of life on Earth. As a star, the Sun is heated to high temperatures by the conversion of nuclear binding energy due to the fusion of hydrogen in its core. This energy is ultimately transferred (released) into space mainly in the form of radiant (light) energy. In physics, energy is the quantitative property that must be transferred to an object in order to perform work on, or to heat, the object.[note 1] Energy is a conserved quantity; the law of conservation of energy states that energy can be converted in form, but not created or destroyed. The SI unit of energy is the joule, which is the energy transferred to an object by the work of moving it a distance of 1 metre against a force of 1 newton. Common forms of energy include the kinetic energy of a moving object, the potential energy stored by an object's position in a force field (gravitational, electric or magnetic), the elastic energy stored by stretching solid objects, the chemical energy released when a fuel burns, the radiant energy carried by light, and the thermal energy due to an object's temperature. Mass and energy are closely related. Due to mass–energy equivalence, any object that has mass when stationary (called rest mass) also has an equivalent amount of energy whose form is called rest energy, and any additional energy (of any form) acquired by the object above that rest energy will increase the object's total mass just as it increases its total energy. For example, after heating an object, its increase in energy could be measured as a small increase in mass, with a sensitive enough scale. Living organisms require exergy to stay alive, such as the energy humans get from food. Human civilization requires energy to function, which it gets from energy resources such as fossil fuels, nuclear fuel, or renewable energy. The processes of Earth's climate and ecosystem are driven by the radiant energy Earth receives from the Sun and the geothermal energy contained within the Earth. In a typical lightning strike, 500 megajoules of electric potential energy is converted into the same amount of energy in other forms, mostly light energy, sound energy and thermal energy. Thermal energy is energy of microscopic constituents of matter, which may include both kinetic and potential energy. The total energy of a system can be subdivided and classified into potential energy, kinetic energy, or combinations of the two in various ways. Kinetic energy is determined by the movement of an object – or the composite motion of the components of an object – and potential energy reflects the potential of an object to have motion, and generally is a function of the position of an object within a field or may be stored in the field itself. While these two categories are sufficient to describe all forms of energy, it is often convenient to refer to particular combinations of potential and kinetic energy as forms in their own right.
For example, macroscopic mechanical energy is the sum of translational and rotational kinetic and potential energy in a system, which neglects the kinetic energy due to temperature; nuclear energy combines the potentials from the nuclear force and the weak force, among others.[citation needed] Some forms of energy (that an object or system can have as a measurable property):
Mechanical – the sum of macroscopic translational and rotational kinetic and potential energies
Electric – potential energy due to or stored in electric fields
Magnetic – potential energy due to or stored in magnetic fields
Gravitational – potential energy due to or stored in gravitational fields
Chemical – potential energy due to chemical bonds
Ionization – potential energy that binds an electron to its atom or molecule
Nuclear – potential energy that binds nucleons to form the atomic nucleus (and drives nuclear reactions)
Chromodynamic – potential energy that binds quarks to form hadrons
Elastic – potential energy due to the deformation of a material (or its container) exhibiting a restorative force
Mechanical wave – kinetic and potential energy in an elastic material due to a propagated deformational wave
Sound wave – kinetic and potential energy in a fluid due to a propagated sound wave (a particular form of mechanical wave)
Radiant – potential energy stored in the fields propagated by electromagnetic radiation, including light
Rest – potential energy due to an object's rest mass
Thermal – kinetic energy of the microscopic motion of particles, a disordered equivalent of mechanical energy
Main articles: History of energy and timeline of thermodynamics, statistical mechanics, and random processes. Thomas Young, the first person to use the term "energy" in the modern sense. The word energy derives from the Ancient Greek ἐνέργεια (energeia, 'activity, operation'),[1] which possibly appears for the first time in the work of Aristotle in the 4th century BC. In contrast to the modern definition, energeia was a qualitative philosophical concept, broad enough to include ideas such as happiness and pleasure. In the late 17th century, Gottfried Leibniz proposed the idea of the Latin vis viva, or living force, which he defined as the product of the mass of an object and its velocity squared; he believed that total vis viva was conserved. To account for slowing due to friction, Leibniz theorized that thermal energy consisted of the random motion of the constituent parts of matter, although it would be more than a century until this was generally accepted. The modern analog of this property, kinetic energy, differs from vis viva only by a factor of two. In 1807, Thomas Young was possibly the first to use the term "energy" instead of vis viva, in its modern sense.[2] Gustave-Gaspard Coriolis described "kinetic energy" in 1829 in its modern sense, and in 1853, William Rankine coined the term "potential energy". The law of conservation of energy was also first postulated in the early 19th century, and applies to any isolated system. It was argued for some years whether heat was a physical substance, dubbed the caloric, or merely a physical quantity, such as momentum. In 1845 James Prescott Joule discovered the link between mechanical work and the generation of heat. These developments led to the theory of conservation of energy, formalized largely by William Thomson (Lord Kelvin) as the field of thermodynamics.
Thermodynamics aided the rapid development of explanations of chemical processes by Rudolf Clausius, Josiah Willard Gibbs, and Walther Nernst. It also led to a mathematical formulation of the concept of entropy by Clausius and to the introduction of laws of radiant energy by Jožef Stefan. According to Noether's theorem, the conservation of energy is a consequence of the fact that the laws of physics do not change over time.[3] Thus, since 1918, theorists have understood that the law of conservation of energy is the direct mathematical consequence of the translational symmetry of the quantity conjugate to energy, namely time. Joule's apparatus for measuring the mechanical equivalent of heat. A descending weight attached to a string causes a paddle immersed in water to rotate. Main article: Units of energy. In 1843, Joule independently discovered the mechanical equivalent of heat in a series of experiments. The most famous of them used the "Joule apparatus": a descending weight, attached to a string, caused rotation of a paddle immersed in water, practically insulated from heat transfer. It showed that the gravitational potential energy lost by the weight in descending was equal to the internal energy gained by the water through friction with the paddle. In the International System of Units (SI), the unit of energy is the joule, named after James Prescott Joule. It is a derived unit, equal to the energy expended (or work done) in applying a force of one newton through a distance of one metre. However, energy is also expressed in many other units not part of the SI, such as ergs, calories, British thermal units, kilowatt-hours and kilocalories, which require a conversion factor when expressed in SI units. The SI unit of energy rate (energy per unit time) is the watt, which is a joule per second. Thus, one joule is one watt-second, and 3600 joules equal one watt-hour. The CGS energy unit is the erg and the imperial and US customary unit is the foot-pound. Other energy units such as the electronvolt, food calorie or thermodynamic kcal (based on the temperature change of water in a heating process), and BTU are used in specific areas of science and commerce. Main articles: Mechanics, Mechanical work, and Thermodynamics. In classical mechanics, energy is a conceptually and mathematically useful property, as it is a conserved quantity. Several formulations of mechanics have been developed using energy as a core concept. Work, a function of energy, is force times distance.
$W = \int_C \mathbf{F} \cdot \mathrm{d}\mathbf{s}$. This says that the work ($W$) is equal to the line integral of the force F along a path C; for details see the mechanical work article. Work, and thus energy, is frame dependent. For example, consider a ball being hit by a bat. In the center-of-mass reference frame, the bat does no work on the ball. But, in the reference frame of the person swinging the bat, considerable work is done on the ball. The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton. The classical equations of motion can be written in terms of the Hamiltonian, even for highly complex or abstract systems. These classical equations have remarkably direct analogs in nonrelativistic quantum mechanics.[4] Another energy-related concept is called the Lagrangian, after Joseph-Louis Lagrange. This formalism is as fundamental as the Hamiltonian, and both can be used to derive the equations of motion or be derived from them. It was invented in the context of classical mechanics, but is generally useful in modern physics. The Lagrangian is defined as the kinetic energy minus the potential energy. Usually, the Lagrange formalism is mathematically more convenient than the Hamiltonian for non-conservative systems (such as systems with friction). Noether's theorem (1918) states that any differentiable symmetry of the action of a physical system has a corresponding conservation law. Noether's theorem has become a fundamental tool of modern theoretical physics and the calculus of variations. A generalisation of the seminal formulations on constants of motion in Lagrangian and Hamiltonian mechanics (1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian; for example, dissipative systems with continuous symmetries need not have a corresponding conservation law. In the context of chemistry, energy is an attribute of a substance as a consequence of its atomic, molecular or aggregate structure. Since a chemical transformation is accompanied by a change in one or more of these kinds of structure, it is invariably accompanied by an increase or decrease of energy of the substances involved. Some energy is transferred between the surroundings and the reactants of the reaction in the form of heat or light; thus the products of a reaction may have more or less energy than the reactants. A reaction is said to be exergonic if the final state is lower on the energy scale than the initial state; in the case of endergonic reactions the situation is the reverse. Chemical reactions are invariably not possible unless the reactants surmount an energy barrier known as the activation energy. The speed of a chemical reaction (at a given temperature T) is related to the activation energy E by Boltzmann's population factor $e^{-E/kT}$, that is, the probability that a molecule has energy greater than or equal to E at the given temperature T. This exponential dependence of a reaction rate on temperature is known as the Arrhenius equation. The activation energy necessary for a chemical reaction can be provided in the form of thermal energy. Main articles: Bioenergetics and Food energy. Basic overview of energy and human life. In biology, energy is an attribute of all biological systems from the biosphere to the smallest living organism. Within an organism it is responsible for the growth and development of a biological cell or an organelle of a biological organism.
Energy is thus often said to be stored by cells in the structures of molecules of substances such as carbohydrates (including sugars), lipids, and proteins, which release energy when reacted with oxygen in respiration. In human terms, the human equivalent (H-e) (human energy conversion) indicates, for a given amount of energy expenditure, the relative quantity of energy needed for human metabolism, assuming an average human energy expenditure of 12,500 kJ per day and a basal metabolic rate of 80 watts. For example, if our bodies run (on average) at 80 watts, then a light bulb running at 100 watts is running at 1.25 human equivalents (100 ÷ 80), i.e. 1.25 H-e. For a difficult task of only a few seconds' duration, a person can put out thousands of watts, many times the 746 watts in one official horsepower. For tasks lasting a few minutes, a fit human can generate perhaps 1,000 watts. For an activity that must be sustained for an hour, output drops to around 300 watts; for an activity kept up all day, 150 watts is about the maximum.[5] The human equivalent assists understanding of energy flows in physical and biological systems by expressing energy units in human terms: it provides a "feel" for the use of a given amount of energy.[6] Sunlight's radiant energy is also captured by plants as chemical potential energy in photosynthesis, when carbon dioxide and water (two low-energy compounds) are converted into the high-energy compounds carbohydrates, lipids, and proteins. Plants also release oxygen during photosynthesis, which is utilized by living organisms as an electron acceptor, to release the energy of carbohydrates, lipids, and proteins. Release of the energy stored during photosynthesis as heat or light may be triggered suddenly by a spark, in a forest fire, or it may be made available more slowly for animal or human metabolism, when these molecules are ingested, and catabolism is triggered by enzyme action. Any living organism relies on an external source of energy – radiant energy from the Sun in the case of green plants, chemical energy in some form in the case of animals – to be able to grow and reproduce. The daily 1500–2000 Calories (6–8 MJ) recommended for a human adult are taken as a combination of oxygen and food molecules, the latter mostly carbohydrates and fats, of which glucose (C6H12O6) and stearin (C57H110O6) are convenient examples. The food molecules are oxidised to carbon dioxide and water in the mitochondria:
C6H12O6 + 6 O2 → 6 CO2 + 6 H2O
C57H110O6 + 81.5 O2 → 57 CO2 + 55 H2O
and some of the energy is used to convert ADP into ATP:
ADP + HPO42− → ATP + H2O
The rest of the chemical energy in O2[7] and the carbohydrate or fat is converted into heat: the ATP is used as a sort of "energy currency", and some of the chemical energy it contains is used for other metabolism when ATP reacts with OH groups and eventually splits into ADP and phosphate (at each stage of a metabolic pathway, some chemical energy is converted into heat). Only a tiny fraction of the original chemical energy is used for work:[note 2]
gain in kinetic energy of a sprinter during a 100 m race: 4 kJ
gain in gravitational potential energy of a 150 kg weight lifted through 2 metres: 3 kJ
daily food intake of a normal adult: 6–8 MJ
It would appear that living organisms are remarkably inefficient (in the physical sense) in their use of the energy they receive (chemical or radiant energy), and it is true that most real machines manage higher efficiencies.
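To make the figures quoted above concrete, here is a minimal Python sketch. The 80 W baseline, the 4 kJ sprint figure and the 6–8 MJ daily intake are taken directly from the text; the 7 MJ midpoint and the value of g are assumptions made for the illustration.

```python
# Illustrative check of the figures quoted above (values from the text, not new data).
BASAL_METABOLIC_POWER_W = 80        # average human power output quoted in the text
DAILY_FOOD_INTAKE_J = 7e6           # assumed midpoint of the 6-8 MJ range quoted above

def human_equivalent(power_w: float) -> float:
    """Express a power in 'human equivalents' (H-e), i.e. multiples of 80 W."""
    return power_w / BASAL_METABOLIC_POWER_W

# A 100 W light bulb in human equivalents (should give 1.25 H-e, as in the text).
print(f"100 W bulb = {human_equivalent(100):.2f} H-e")

# Mechanical work in the two examples from the text.
sprinter_kinetic_energy_j = 4e3                 # quoted kinetic-energy gain, 100 m race
g = 9.81                                        # standard gravity, m/s^2 (assumed)
weight_potential_energy_j = 150 * g * 2.0       # m*g*h for a 150 kg weight lifted 2 m

print(f"150 kg lifted 2 m: {weight_potential_energy_j / 1e3:.1f} kJ")  # ~2.9 kJ, i.e. ~3 kJ

# Fraction of a day's food energy that each piece of mechanical work represents.
for label, work_j in [("sprint", sprinter_kinetic_energy_j),
                      ("weight lift", weight_potential_energy_j)]:
    print(f"{label}: {work_j / DAILY_FOOD_INTAKE_J:.2%} of daily intake")
```

Both tasks come out well below one percent of the daily intake, which is the "remarkable inefficiency" the paragraph above refers to.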
In growing organisms the energy that is converted to heat serves a vital purpose, as it allows the organism's tissue to be highly ordered with regard to the molecules it is built from. The second law of thermodynamics states that energy (and matter) tends to become more evenly spread out across the universe: to concentrate energy (or matter) in one specific place, it is necessary to spread out a greater amount of energy (as heat) across the remainder of the universe ("the surroundings").[note 3] Simpler organisms can achieve higher energy efficiencies than more complex ones, but the complex organisms can occupy ecological niches that are not available to their simpler brethren. The conversion of a portion of the chemical energy to heat at each step in a metabolic pathway is the physical reason behind the pyramid of biomass observed in ecology: to take just the first step in the food chain, of the estimated 124.7 Pg/a of carbon that is fixed by photosynthesis, 64.3 Pg/a (52%) are used for the metabolism of green plants,[8] i.e. reconverted into carbon dioxide and heat. In geology, continental drift, mountain ranges, volcanoes, and earthquakes are phenomena that can be explained in terms of energy transformations in the Earth's interior,[9] while meteorological phenomena like wind, rain, hail, snow, lightning, tornadoes and hurricanes are all a result of energy transformations brought about by solar energy on the atmosphere of the planet Earth. Sunlight may be stored as gravitational potential energy after it strikes the Earth, as (for example) water evaporates from oceans and is deposited upon mountains (where, after being released at a hydroelectric dam, it can be used to drive turbines or generators to produce electricity). Sunlight also drives many weather phenomena, save those generated by volcanic events. An example of a solar-mediated weather event is a hurricane, which occurs when large unstable areas of warm ocean, heated over months, give up some of their thermal energy suddenly to power a few days of violent air movement. In a slower process, radioactive decay of atoms in the core of the Earth releases heat. This thermal energy drives plate tectonics and may lift mountains, via orogenesis. This slow lifting represents a kind of gravitational potential energy storage of the thermal energy, which may later be released as active kinetic energy in landslides, after a triggering event. Earthquakes also release stored elastic potential energy in rocks, a store that has been produced ultimately from the same radioactive heat sources. Thus, according to present understanding, familiar events such as landslides and earthquakes release energy that has been stored as potential energy in the Earth's gravitational field or elastic strain (mechanical potential energy) in rocks. Before that, this energy had been stored in heavy atoms ever since the collapse of long-destroyed supernova stars created these atoms. In cosmology and astronomy the phenomena of stars, novae, supernovae, quasars and gamma-ray bursts are the universe's highest-output energy transformations of matter. All stellar phenomena (including solar activity) are driven by various kinds of energy transformations. Energy in such transformations is either from gravitational collapse of matter (usually molecular hydrogen) into various classes of astronomical objects (stars, black holes, etc.), or from nuclear fusion (of lighter elements, primarily hydrogen).
The nuclear fusion of hydrogen in the Sun also releases another store of potential energy which was created at the time of the Big Bang. At that time, according to theory, space expanded and the universe cooled too rapidly for hydrogen to completely fuse into heavier elements. This meant that hydrogen represents a store of potential energy that can be released by fusion. Such a fusion process is triggered by heat and pressure generated from gravitational collapse of hydrogen clouds when they produce stars, and some of the fusion energy is then transformed into sunlight. Main article: Energy operator. In quantum mechanics, energy is defined in terms of the energy operator as a time derivative of the wave function. The Schrödinger equation equates the energy operator to the full energy of a particle or a system. Its results can be considered as a definition of measurement of energy in quantum mechanics. The Schrödinger equation describes the space- and time-dependence of a slowly changing (non-relativistic) wave function of quantum systems. The solution of this equation for a bound system is discrete (a set of permitted states, each characterized by an energy level), which results in the concept of quanta. In the solution of the Schrödinger equation for any oscillator (vibrator) and for electromagnetic waves in a vacuum, the resulting energy states are related to the frequency by Planck's relation $E = h\nu$ (where $h$ is Planck's constant and $\nu$ the frequency). In the case of an electromagnetic wave these energy states are called quanta of light or photons. When calculating kinetic energy (work to accelerate a massive body from zero speed to some finite speed) relativistically – using Lorentz transformations instead of Newtonian mechanics – Einstein discovered an unexpected by-product of these calculations to be an energy term which does not vanish at zero speed. He called it rest energy: energy which every massive body must possess even when being at rest. The amount of energy is directly proportional to the mass of the body: $E_0 = mc^2$, where m is the mass of the body, c is the speed of light in vacuum, and $E_0$ is the rest energy. For example, consider electron–positron annihilation, in which the rest energy of these two individual particles (equivalent to their rest mass) is converted to the radiant energy of the photons produced in the process. In this system the matter and antimatter (electrons and positrons) are destroyed and changed to non-matter (the photons). However, the total mass and total energy do not change during this interaction. The photons each have no rest mass but nonetheless have radiant energy which exhibits the same inertia as did the two original particles. This is a reversible process – the inverse process is called pair creation – in which the rest mass of particles is created from the radiant energy of two (or more) annihilating photons. In general relativity, the stress–energy tensor serves as the source term for the gravitational field, in rough analogy to the way mass serves as the source term in the non-relativistic Newtonian approximation.[10] Energy and mass are manifestations of one and the same underlying physical property of a system.
This property is responsible for the inertia and strength of gravitational interaction of the system ("mass manifestations"), and is also responsible for the potential ability of the system to perform work or heating ("energy manifestations"), subject to the limitations of other physical laws. In classical physics, energy is a scalar quantity, the canonical conjugate to time. In special relativity energy is also a scalar (although not a Lorentz scalar but a time component of the energy–momentum 4-vector).[10] In other words, energy is invariant with respect to rotations of space, but not invariant with respect to rotations of space-time (= boosts). Main article: Energy transformation. Some forms of transfer of energy ("energy in transit") from one object or system to another:
Heat – the amount of thermal energy in transit spontaneously towards a lower-temperature object
Work – the amount of energy in transit due to a displacement in the direction of an applied force
Transfer of material – the amount of energy carried by matter that is moving from one system to another
A turbo generator transforms the energy of pressurised steam into electrical energy. Energy may be transformed between different forms at various efficiencies. Items that transform between these forms are called transducers. Examples of transducers include a battery, from chemical energy to electric energy; a dam: gravitational potential energy to kinetic energy of moving water (and the blades of a turbine) and ultimately to electric energy through an electric generator; or a heat engine, from heat to work. Examples of energy transformation include generating electric energy from heat energy via a steam turbine, or lifting an object against gravity using electrical energy driving a crane motor. Lifting against gravity performs mechanical work on the object and stores gravitational potential energy in the object. If the object falls to the ground, gravity does mechanical work on the object which transforms the potential energy in the gravitational field to the kinetic energy released as heat on impact with the ground. Our Sun transforms nuclear potential energy to other forms of energy; its total mass does not decrease due to that in itself (since it still contains the same total energy even if in different forms), but its mass does decrease when the energy escapes out to its surroundings, largely as radiant energy. There are strict limits to how efficiently heat can be converted into work in a cyclic process, e.g. in a heat engine, as described by Carnot's theorem and the second law of thermodynamics. However, some energy transformations can be quite efficient. The direction of transformations in energy (what kind of energy is transformed to what other kind) is often determined by entropy (equal energy spread among all available degrees of freedom) considerations. In practice all energy transformations are permitted on a small scale, but certain larger transformations are not permitted because it is statistically unlikely that energy or matter will randomly move into more concentrated forms or smaller spaces. Energy transformations in the universe over time are characterized by various kinds of potential energy that has been available since the Big Bang later being "released" (transformed to more active types of energy such as kinetic or radiant energy) when a triggering mechanism is available.
Familiar examples of such processes include nuclear decay, in which energy is released that was originally "stored" in heavy isotopes (such as uranium and thorium), by nucleosynthesis, a process ultimately using the gravitational potential energy released from the gravitational collapse of supernovae to store energy in the creation of these heavy elements before they were incorporated into the solar system and the Earth. This energy is triggered and released in nuclear fission bombs or in civil nuclear power generation. Similarly, in the case of a chemical explosion, chemical potential energy is transformed to kinetic energy and thermal energy in a very short time. Yet another example is that of a pendulum. At its highest points the kinetic energy is zero and the gravitational potential energy is at maximum. At its lowest point the kinetic energy is at maximum and is equal to the decrease of potential energy. If one (unrealistically) assumes that there is no friction or other losses, the conversion of energy between these processes would be perfect, and the pendulum would continue swinging forever. Energy is also transferred from potential energy ($E_p$) to kinetic energy ($E_k$) and then back to potential energy constantly. This is referred to as conservation of energy. In this closed system, energy cannot be created or destroyed; therefore, the initial energy and the final energy will be equal to each other. This can be demonstrated by the following: $E_{pi} + E_{ki} = E_{pF} + E_{kF}$. The equation can then be simplified further since $E_p = mgh$ (mass times acceleration due to gravity times the height) and $E_k = \tfrac{1}{2}mv^2$ (half mass times velocity squared). Then the total amount of energy can be found by adding $E_p + E_k = E_{total}$. Energy gives rise to weight when it is trapped in a system with zero momentum, where it can be weighed. It is also equivalent to mass, and this mass is always associated with it. Mass is also equivalent to a certain amount of energy, and likewise always appears associated with it, as described in mass–energy equivalence. The formula E = mc², derived by Albert Einstein (1905), quantifies the relationship between rest mass and rest energy within the concept of special relativity. In different theoretical frameworks, similar formulas were derived by J.J. Thomson (1881), Henri Poincaré (1900), Friedrich Hasenöhrl (1904) and others (see Mass–energy equivalence#History for further information). Part of the rest energy (equivalent to rest mass) of matter may be converted to other forms of energy (still exhibiting mass), but neither energy nor mass can be destroyed; rather, both remain constant during any process. However, since $c^2$ is extremely large relative to ordinary human scales, the conversion of an everyday amount of rest mass (for example, 1 kg) from rest energy to other forms of energy (such as kinetic energy, thermal energy, or the radiant energy carried by light and other radiation) can liberate tremendous amounts of energy (~ $9\times 10^{16}$ joules = 21 megatons of TNT), as can be seen in nuclear reactors and nuclear weapons.
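As a quick numerical check of the two relations just quoted, the pendulum energy balance and E = mc², here is a minimal Python sketch; the bob mass, the swing height and the TNT conversion factor (1 megaton ≈ 4.184 × 10^15 J) are illustrative assumptions rather than values from the article.

```python
import math

# --- Pendulum: potential energy at the top reappears as kinetic energy at the bottom.
m = 1.0          # bob mass in kg (illustrative value)
g = 9.81         # gravitational acceleration, m/s^2
h = 0.20         # swing height above the lowest point, in metres (illustrative)

E_p_top = m * g * h                       # E_p = m g h at the highest point
v_bottom = math.sqrt(2 * g * h)           # from E_k = 1/2 m v^2 = m g h
E_k_bottom = 0.5 * m * v_bottom ** 2

assert math.isclose(E_p_top, E_k_bottom)  # total E_p + E_k is conserved

# --- Rest energy of 1 kg via E0 = m c^2, and the quoted TNT equivalent.
c = 2.998e8                               # speed of light, m/s
rest_energy_j = 1.0 * c ** 2              # ~9e16 J, as stated above
MEGATON_TNT_J = 4.184e15                  # 1 megaton of TNT (standard conversion, assumed here)
print(f"E0 = {rest_energy_j:.2e} J  ~  {rest_energy_j / MEGATON_TNT_J:.0f} Mt TNT")
```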
Conversely, the mass equivalent of an everyday amount of energy is minuscule, which is why a loss of energy (loss of mass) from most systems is difficult to measure on a weighing scale, unless the energy loss is very large. Examples of large transformations between rest energy (of matter) and other forms of energy (e.g., kinetic energy into particles with rest mass) are found in nuclear physics and particle physics. Thermodynamics divides energy transformation into two kinds: reversible processes and irreversible processes. An irreversible process is one in which energy is dissipated (spread) into empty energy states available in a volume, from which it cannot be recovered into more concentrated forms (fewer quantum states) without degradation of even more energy. A reversible process is one in which this sort of dissipation does not happen. For example, conversion of energy from one type of potential field to another is reversible, as in the pendulum system described above. In processes where heat is generated, quantum states of lower energy, present as possible excitations in fields between atoms, act as a reservoir for part of the energy, from which it cannot be recovered, in order to be converted with 100% efficiency into other forms of energy. In this case, the energy must partly stay as heat, and cannot be completely recovered as usable energy, except at the price of an increase in some other kind of heat-like increase in disorder in quantum states, in the universe (such as an expansion of matter, or a randomisation in a crystal). As the universe evolves in time, more and more of its energy becomes trapped in irreversible states (i.e., as heat or other kinds of increases in disorder). This has been referred to as the inevitable thermodynamic heat death of the universe. In this heat death the energy of the universe does not change, but the fraction of energy which is available to do work through a heat engine, or be transformed to other usable forms of energy (through the use of generators attached to heat engines), grows less and less. Main article: Conservation of energy. The fact that energy can be neither created nor destroyed is called the law of conservation of energy. In the form of the first law of thermodynamics, this states that a closed system's energy is constant unless energy is transferred in or out by work or heat, and that no energy is lost in transfer. The total inflow of energy into a system must equal the total outflow of energy from the system, plus the change in the energy contained within the system. Whenever one measures (or calculates) the total energy of a system of particles whose interactions do not depend explicitly on time, it is found that the total energy of the system always remains constant.[11] While heat can always be fully converted into work in a reversible isothermal expansion of an ideal gas, for cyclic processes of practical interest in heat engines the second law of thermodynamics states that the system doing work always loses some energy as waste heat. This creates a limit to the amount of heat energy that can do work in a cyclic process, a limit called the available energy. Mechanical and other forms of energy can be transformed in the other direction into thermal energy without such limitations.[12] The total energy of a system can be calculated by adding up all forms of energy in the system. Richard Feynman said during a 1961 lecture:[13] There is a fact, or if you wish, a law, governing all natural phenomena that are known to date.
There is no known exception to this law – it is exact so far as we know. The law is called the conservation of energy. It states that there is a certain quantity, which we call energy, that does not change in the manifold changes which nature undergoes. That is a most abstract idea, because it is a mathematical principle; it says that there is a numerical quantity which does not change when something happens. It is not a description of a mechanism, or anything concrete; it is just a strange fact that we can calculate some number and when we finish watching nature go through her tricks and calculate the number again, it is the same. —  The Feynman Lectures on Physics. Most kinds of energy (with gravitational energy being a notable exception)[14] are subject to strict local conservation laws as well. In this case, energy can only be exchanged between adjacent regions of space, and all observers agree as to the volumetric density of energy in any given space. There is also a global law of conservation of energy, stating that the total energy of the universe cannot change; this is a corollary of the local law, but not vice versa.[12][13] This law is a fundamental principle of physics. As shown rigorously by Noether's theorem, the conservation of energy is a mathematical consequence of translational symmetry of time,[15] a property of most phenomena below the cosmic scale that makes them independent of their locations on the time coordinate. Put differently, yesterday, today, and tomorrow are physically indistinguishable. This is because energy is the quantity which is canonically conjugate to time. This mathematical entanglement of energy and time also results in the uncertainty principle: it is impossible to define the exact amount of energy during any definite time interval. The uncertainty principle should not be confused with energy conservation; rather, it provides mathematical limits to which energy can in principle be defined and measured. Each of the basic forces of nature is associated with a different type of potential energy, and all types of potential energy (like all other types of energy) appear as system mass whenever present. For example, a compressed spring will be slightly more massive than before it was compressed. Likewise, whenever energy is transferred between systems by any mechanism, an associated mass is transferred with it. In quantum mechanics energy is expressed using the Hamiltonian operator. On any time scale, the uncertainty in the energy is given by $\Delta E \Delta t \geq \frac{\hbar}{2}$, which is similar in form to the Heisenberg uncertainty principle (but not really mathematically equivalent thereto, since H and t are not dynamically conjugate variables, neither in classical nor in quantum mechanics). In particle physics, this inequality permits a qualitative understanding of virtual particles, which carry momentum; the exchange of virtual particles with and between real particles is responsible for the creation of all known fundamental forces (more accurately known as fundamental interactions). Virtual photons (which are simply the lowest quantum mechanical energy state of photons) are also responsible for the electrostatic interaction between electric charges (which results in Coulomb's law), for spontaneous radiative decay of excited atomic and nuclear states, for the Casimir force, for van der Waals forces and some other observable phenomena. Energy transfer can be considered for the special case of systems which are closed to transfers of matter.
The portion of the energy which is transferred by conservative forces over a distance is measured as the work the source system does on the receiving system. The portion of the energy which does not do work during the transfer is called heat.[note 4] Energy can be transferred between systems in a variety of ways. Examples include the transmission of electromagnetic energy via photons, physical collisions which transfer kinetic energy,[note 5] and the conductive transfer of thermal energy. Energy is strictly conserved and is also locally conserved wherever it can be defined. In thermodynamics, for closed systems, the process of energy transfer is described by the first law:[note 6] $\Delta E = W + Q$, where $E$ is the amount of energy transferred, $W$ represents the work done on the system, and $Q$ represents the heat flow into the system. As a simplification, the heat term, $Q$, is sometimes ignored, especially when the thermal efficiency of the transfer is high: $\Delta E = W$. This simplified equation is the one used to define the joule, for example. Beyond the constraints of closed systems, open systems can gain or lose energy in association with matter transfer (both of these processes are illustrated by fueling an automobile, a system which gains in energy thereby, without addition of either work or heat). Denoting the energy carried by the transferred matter by $E_{matter}$, one may write $\Delta E = W + Q + E_{matter}$. Internal energy is the sum of all microscopic forms of energy of a system. It is the energy needed to create the system. It is related to the potential energy, e.g., molecular structure, crystal structure, and other geometric aspects, as well as the motion of the particles, in the form of kinetic energy. Thermodynamics is chiefly concerned with changes in internal energy and not its absolute value, which is impossible to determine with thermodynamics alone.[16] The first law of thermodynamics asserts that energy (but not necessarily thermodynamic free energy) is always conserved[17] and that heat flow is a form of energy transfer. For homogeneous systems, with a well-defined temperature and pressure, a commonly used corollary of the first law is that, for a system subject only to pressure forces and heat transfer (e.g., a cylinder full of gas) without chemical changes, the differential change in the internal energy of the system (with a gain in energy signified by a positive quantity) is given as $\mathrm{d}E = T\,\mathrm{d}S - P\,\mathrm{d}V$, where the first term on the right is the heat transferred into the system, expressed in terms of temperature T and entropy S (in which entropy increases and the change dS is positive when the system is heated), and the last term on the right hand side is identified as work done on the system, where pressure is P and volume V (the negative sign results since compression of the system requires work to be done on it and so the volume change, dV, is negative when work is done on the system). This equation is highly specific, ignoring all chemical, electrical, nuclear, and gravitational forces, as well as effects such as advection of any form of energy other than heat and pV-work. The general formulation of the first law (i.e., conservation of energy) is valid even in situations in which the system is not homogeneous.
For these cases the change in internal energy of a closed system is expressed in a general form by $\mathrm{d}E = \delta Q + \delta W$, where $\delta Q$ is the heat supplied to the system and $\delta W$ is the work applied to the system. The energy of a mechanical harmonic oscillator (a mass on a spring) is alternately kinetic and potential. At two points in the oscillation cycle it is entirely kinetic, and at two points it is entirely potential. Over the whole cycle, or over many cycles, the net energy is thus equally split between kinetic and potential. This is called the equipartition principle; the total energy of a system with many degrees of freedom is equally split among all available degrees of freedom. This principle is vitally important to understanding the behaviour of a quantity closely related to energy, called entropy. Entropy is a measure of evenness of a distribution of energy between parts of a system. When an isolated system is given more degrees of freedom (i.e., given new available energy states that are the same as existing states), then total energy spreads over all available degrees equally without distinction between "new" and "old" degrees. This mathematical result is called the second law of thermodynamics. The second law of thermodynamics is valid only for systems which are near or in an equilibrium state. For non-equilibrium systems, the laws governing the system's behavior are still debatable. One of the guiding principles for these systems is the principle of maximum entropy production.[18][19] It states that nonequilibrium systems behave in such a way as to maximize their entropy production.[20] ^ The second law of thermodynamics imposes limitations on the capacity of a system to transfer energy by performing work, since some of the system's energy might necessarily be consumed in the form of heat instead. See e.g. Lehrman, Robert L. (1973). "Energy Is Not The Ability To Do Work". The Physics Teacher. 11 (1): 15–18. Bibcode:1973PhTea..11...15L. doi:10.1119/1.2349846. ISSN 0031-921X. ^ These examples are solely for illustration, as it is not the energy available for work which limits the performance of the athlete but the power output of the sprinter and the force of the weightlifter. A worker stacking shelves in a supermarket does more work (in the physical sense) than either of the athletes, but does it more slowly. ^ Crystals are another example of highly ordered systems that exist in nature: in this case too, the order is associated with the transfer of a large amount of heat (known as the lattice energy) to the surroundings. ^ Although heat is "wasted" energy for a specific energy transfer (see: waste heat), it can often be harnessed to do useful work in subsequent interactions. However, the maximum energy that can be "recycled" from such recovery processes is limited by the second law of thermodynamics. ^ The mechanism for most macroscopic physical collisions is actually electromagnetic, but it is very common to simplify the interaction by ignoring the mechanism of collision and just calculate the beginning and end result. ^ There are several sign conventions for this equation. Here, the signs in this equation follow the IUPAC convention. ^ Harper, Douglas. "Energy". Online Etymology Dictionary. Archived from the original on October 11, 2007. Retrieved May 1, 2007. ^ Smith, Crosbie (1998).
The Science of Energy – a Cultural History of Energy Physics in Victorian Britain. The University of Chicago Press. ISBN 978-0-226-76420-7. ^ Lofts, G.; O'Keeffe, D.; et al. (2004). "11 – Mechanical Interactions". Jacaranda Physics 1 (2nd ed.). Milton, Queensland, Australia: John Wiley & Sons Australia Ltd. p. 286. ISBN 978-0-7016-3777-4. ^ The Hamiltonian. MIT OpenCourseWare website, 18.013A, Chapter 16.3. Accessed February 2007. ^ "Retrieved on May-29-09". Uic.edu. Archived from the original on 2010-06-04. Retrieved 2010-12-12. ^ Bicycle calculator – speed, weight, wattage etc. "Bike Calculator". Archived from the original on 2009-05-13. Retrieved 2009-05-29. ^ Schmidt-Rohr, K. (2015). "Why Combustions Are Always Exothermic, Yielding About 418 kJ per Mole of O2". J. Chem. Educ. 92 (12): 2094–2099. Bibcode:2015JChEd..92.2094S. doi:10.1021/acs.jchemed.5b00333. ^ Ito, Akihito; Oikawa, Takehisa (2004). "Global Mapping of Terrestrial Primary Productivity and Light-Use Efficiency with a Process-Based Model", archived 2006-10-02 at the Wayback Machine, in Shiyomi, M. et al. (eds.) Global Environmental Change in the Ocean and on Land. pp. 343–58. ^ "Earth's Energy Budget". Okfirst.ocs.ou.edu. Archived from the original on 2008-08-27. Retrieved 2010-12-12. ^ a b Misner, Thorne, Wheeler (1973). Gravitation. San Francisco: W.H. Freeman. ISBN 978-0-7167-0344-0. ^ Berkeley Physics Course Volume 1. Charles Kittel, Walter D. Knight and Malvin A. Ruderman. ^ a b The Laws of Thermodynamics, archived 2006-12-15 at the Wayback Machine, including careful definitions of energy, free energy, et cetera. ^ a b Feynman, Richard (1964). The Feynman Lectures on Physics; Volume 1. U.S.A.: Addison Wesley. ISBN 978-0-201-02115-8. ^ "E. Noether's Discovery of the Deep Connection Between Symmetries and Conservation Laws". Physics.ucla.edu. 1918-07-16. Archived from the original on 2011-05-14. Retrieved 2010-12-12. ^ "Time Invariance". Ptolemy.eecs.berkeley.edu. Archived from the original on 2011-07-17. Retrieved 2010-12-12. ^ I. Klotz, R. Rosenberg, Chemical Thermodynamics – Basic Concepts and Methods, 7th ed., Wiley (2008), p. 39. ^ Kittel and Kroemer (1980). Thermal Physics. New York: W.H. Freeman. ISBN 978-0-7167-1088-2. ^ Onsager, L. (1931). "Reciprocal relations in irreversible processes". Phys. Rev. 37 (4): 405–26. Bibcode:1931PhRv...37..405O. doi:10.1103/PhysRev.37.405. ^ Martyushev, L.M.; Seleznev, V.D. (2006). "Maximum entropy production principle in physics, chemistry and biology". Phys. Rep. 426 (1): 1–45. Bibcode:2006PhR...426....1M. doi:10.1016/j.physrep.2005.12.001. ^ Belkin, A.; et al. (2015). "Self-Assembled Wiggling Nano-Structures and the Principle of Maximum Entropy Production". Sci. Rep. 5: 8323. Bibcode:2015NatSR...5E8323B. doi:10.1038/srep08323. PMC 4321171. PMID 25662746.
Can quantum randomness be somehow explained by classical uncertainty? [closed]
In quantum mechanics, the outcome of each measurement is random, distributed according to the squared amplitude of the wave function obtained from Schrödinger's equation. Now, can someone suggest that QM measurement outcomes are just produced by a deterministic, real, local process (or field) in 3-D space, varying because of uncontrollable phenomena like noise? (An example of such a classical field could be a real 3-D coherently rotating vector field accounting for the Schrödinger equation and electron spin in a consistent manner, which carries distributed angular momentum and energy in the same way as a circularly polarized electromagnetic wave.) Setting aside Bell's theorem, which forbids such an explanation and any other local realistic theory, would such a classical explanation ever be able to reproduce the probabilities obtained from QM? I have seen that deterministic QM interpretations like Bohmian mechanics introduce concepts like an infinite-dimensional configuration space. Why is this necessary for reproducing QM?
quantum-mechanics wavefunction quantum-interpretations bells-inequality bohmian-mechanics
asked Feb 5 at 4:45 by Ali Lavasani, edited Feb 5 at 4:49
closed as off-topic by WillO, Aaron Stevens, ZeroTheHero, Jon Custer, user191954 Feb 8 at 15:29: "We deal with mainstream physics here. Questions about the general correctness of unpublished personal theories are off topic, although specific questions evaluating new theories in the context of established science are usually allowed. For more information, see Is non mainstream physics appropriate for this site?"
Square root of the wavefunction (which is complex)? Do you mean modulus square? – ZeroTheHero, Feb 5 at 4:48
Oh yes, I corrected it. Thanks for reminding. – Ali Lavasani, Feb 5 at 4:48
If you "set aside Bell's theorem", you're no longer in the realm of mainstream physics, and therefore outside the realm of this site. – G. Smith, Feb 5 at 5:40
I want to know why classical deterministic explanations cannot reproduce QM predictions. This is not about Bell's theorem, which says that local realism doesn't exist even in a random way. Prior to Bell's theorem, many people like Einstein and Bohm tried to offer some "hidden variable" explanation, but they never considered a simple classical probability theory. I want to know what strictly rules such theories out. I mentioned that in order not to get the answer "Because of Bell's theorem". – Ali Lavasani, Feb 5 at 6:03
You should look into Stochastic Electrodynamics (SED), which reproduces most (if not all) quantum mechanics predictions from a random classical field theory. Contrary to what many people may say here, SED is a mainstream physical theory (maybe not very well known), and it's still under development today. It's a kind of hidden variables theory (but not of the usual type). I don't know its current status relative to Bell's theorem. See this article: en.wikipedia.org/wiki/Stochastic_electrodynamics – Cham, Feb 5 at 15:20
There are several serious obstructions. One is provided by Bell's analysis concerning the conflict between realism and locality.
However that obstruction concerns a very peculiar situation, referring to a bipartite system, with parts causally separated, and quantum entangled states. There is another no-go result, usually called the Kochen–Specker theorem, leading to a very severe obstruction against any completely classical interpretation of Quantum Mechanics based on hidden variables and epistemic randomness (however Bohmian quantum mechanics is untouched by it). Actually this theorem exists in a number of versions and its origin can be traced back to the celebrated Gleason's theorem, as observed by Bell himself in his second famous paper of 1966, preceding the paper by Kochen and Specker of 1967. The basic idea underpinning the no-go result is that quantum observables $A$ (selfadjoint operators on the Hilbert space of the system) are actually classical variables and there is a classical hidden state $\lambda$ (a set of hidden classical variables $\lambda \in \Lambda$) which fixes the values $v_\lambda(A) \in \mathbb R$ of every observable $A$. In this view, the randomness of values attained by measurements of quantum observables is explained by assuming that $\lambda$ is unknown, and that we know only a probability distribution $\mu$ over $\Lambda$ describing the probability that $\lambda$ attains some value (discrete distribution) or stays in some "continuous" set. This is what happens, for instance, in classical statistical mechanics. Here quantum probability becomes epistemic instead of ontic as in the standard interpretation of QM. In other words there must exist some correspondence $\mu \leftrightarrow |\psi \rangle $ such that $$\langle \psi| A \psi \rangle = \int_{\Lambda} v_\lambda(A) d\mu(\lambda)\:.$$ It remains to fix general rules to associate sharp values $v_\lambda(A)$ with observables $A$. The problem is how one should deal with functional relations such as $C=A+B$. The naive idea of always assuming that $v_\lambda(C) = v_\lambda(A) + v_\lambda(B)$ turns out to be untenable when $A$ and $B$ are quantum mechanically described as incompatible observables, as explained by Bell in 1966 while analysing an earlier no-go theorem by von Neumann. A fair set of assumptions for $A \mapsto v_\lambda(A)$, which avoids having to tackle any classical interpretation of quantum incompatibility, was proposed by Kochen and Specker, referring to the algebra of observables $B(\cal H)_{sa}$ over a finite-dimensional Hilbert space $\cal H$ (the finite dimensionality requirement can be relaxed by assuming some suitable continuity requirement on $v_\lambda$). (1) The map $v_\lambda : B({\cal H})_{sa} \ni A \mapsto v_\lambda(A) \in \mathbb R $ is non-trivial (not all values are $0$). (2) If $A,B \in B(\cal H)_{sa}$ are compatible observables (i.e. they commute), then $v_{\lambda}(A+B) = v_\lambda(A)+ v_\lambda(B)$. (3) If $A,B \in B(\cal H)_{sa}$ are compatible observables (i.e. they commute), then $v_{\lambda}(AB) = v_\lambda(A)v_\lambda(B)$. A more precise theory would also fix how the map $v_\lambda$ deals with incompatible observables. This technical specification is not necessary for producing the no-go result I am going to state, and this fact also shows how powerful KS' result is. Kochen–Specker Theorem: If $3\leq \dim(\cal H) < +\infty$, then there is no map $v_\lambda : B(\cal H)_{sa} \ni A \mapsto v_\lambda(A) \in \mathbb R $ satisfying requirements (1), (2), (3). This theorem rules out from scratch every classical interpretation of QM based on the realism hypothesis, i.e. that every quantum observable is actually classical and always has an (unknown) sharp value.
All that before any attempt to explain quantum randomness in terms of some classical uncertainty. Actually, closer scrutiny shows that there is a way out when assuming the contextuality requirement: that the always existing values $v_\lambda(A)$ depend also on which observable $B$ I measure together with $A$ ($B$ is therefore assumed to be compatible with $A$). It may happen that $v_\lambda(A|B)\neq v_\lambda(A|B')$ if $B$ and $B'$ are incompatible (compatibility is not a transitive relation!). This arduous approach seems to be logically consistent even if it requires a big revision of our classical ideas on the physical world (personally I definitely prefer the standard interpretation of QM!). The result of Kochen and Specker rules out realistic non-contextual hidden-variable interpretations of quantum theory. There is an equivalent formulation of the K-S theorem which is more suitable for experiments. It is based on the notion of a test. A test is an observable which can assume only the value $0$ or $1$; in the standard formalism the tests are the orthogonal projectors $P \in B(\cal H)_{sa}$. If $3\leq \dim(\cal H) < +\infty$, then there is a set $\cal P$ of tests such that there is no map $v_\lambda: {\cal P} \ni P \to \{0,1\}$ satisfying the following requirements: (1) If $P,P' \in \cal P$ are compatible mutually exclusive tests ($PP'=0$ as orthogonal projectors), then at most one of $v_\lambda(P)$, $v_\lambda(P')$ does not vanish. (2) If $P_1,\ldots, P_n \in \cal P$ is a set of pairwise compatible and mutually exclusive tests such that $P_1+\ldots + P_n =I$, then one of the $v_\lambda(P_k)$ does not vanish. The original proof of the KS theorem in 1967 showed that if $\dim(\cal H)=3$ there is a set of 117 tests for which no such map exists. Actually, a general proof valid for every dimension (also infinite, when assuming some continuity hypothesis on $v_\lambda$) easily arises from Gleason's theorem, as already noticed by Bell. I have seen that deterministic QM interpretations like Bohmian mechanics introduce concepts like infinite dimensional configuration space. I do not think so. Bohmian mechanics for particles is formulated in the standard $3N$-dimensional configuration space of a system of $N$ particles. Maybe you are considering the case of a quantum field. I am not an expert on this subject, however. As recent references I would like to mention various entries of the Stanford Encyclopedia of Philosophy, Landsman's book on foundations of quantum theory, and a book consisting of a wide collection of recent papers on Bell's analysis and further foundational issues. (I am publishing a book on fundamental mathematical structures in quantum theory and chapter 5 is completely devoted to studying these issues, including Bell's inequality and its interplay with locality and contextuality). My answer here could be of interest. edited Feb 5 at 18:02, answered Feb 5 at 9:42 by Valter Moretti $\begingroup$ Thanks for your answer and additional references (including your very excellent new book). Note that the Landsman book you cite is open access, and Springer explicitly says so on its copyright page. A free and perfectly legal pdf is downloadable from researchgate.net/publication/… $\endgroup$ – John Forkosh Feb 5 at 13:06 $\begingroup$ Thank you for the piece of information about Klaas's book. I do not know if my book is excellent!
Thank you however :) $\endgroup$ – Valter Moretti Feb 5 at 13:47 Bell's theorem is a "no-go theorem" that draws an important distinction between quantum mechanics and the world as described by classical mechanics, particularly concerning quantum entanglement, where two or more particles in a quantum state continue to be mutually dependent, even at large physical separations. Bell's theorem states that any physical theory that incorporates local realism cannot reproduce all the predictions of quantum mechanical theory. Because numerous experiments agree with the predictions of quantum mechanical theory, and show differences between correlations that could not be explained by local hidden variables, the experimental results have been taken by many as refuting the concept of local realism as an explanation of the physical phenomena under test. For a hidden variable theory, if Bell's conditions are correct, the results that agree with quantum mechanical theory appear to indicate superluminal (faster-than-light) effects, in contradiction to the principle of locality. (Currently accepted quantum field theories are local in the terminology of the Lagrangian formalism and axiomatic approach.) The "no go" means that all the data that are fitted by quantum mechanical models, and thus validate quantum mechanics, cannot be fitted by classical theories if locality is assumed in the mathematical model. Locality is a principle in both classical and quantum physics; principles are axioms for physics models. In physics, the principle of locality states that an object is directly influenced only by its immediate surroundings. A theory which includes the principle of locality is said to be a "local theory". This is an alternative to the older concept of instantaneous "action at a distance". Locality evolved out of the field theories of classical physics. The concept is that for an action at one point to have an influence at another point, something in the space between those points, such as a field, must mediate the action. To exert an influence, something, such as a wave or particle, must travel through the space between the two points, carrying the influence. So the answer is no, you cannot ignore Bell's theorem within mainstream physics, which is what this site discusses. – anna v $\begingroup$ What I have heard is that the quantum probability is obtained from Born's rule, which is psi-squared. A classical uncontrollable (and unpredictable in practice, not in principle) factor such as noise or chaos cannot produce the probability distributions we get from Born's rule; for example, such a classical distribution would be Gaussian. Is this correct? This doesn't have anything to do with Bell's theorem. $\endgroup$ – Ali Lavasani Feb 5 at 6:56 $\begingroup$ QM probability is $Ψ^*Ψ$, the complex conjugate $Ψ^*$ multiplied by $Ψ$. Bohm's model does reproduce non-relativistic QM, that is why it is called an interpretation of quantum mechanics, but it is non-local. The statistical argument does not suffice for complicated models aiming at finding a classical physics explanation of quantum mechanics; locality does. Look at deterministic proposals en.wikipedia.org/wiki/… $\endgroup$ – anna v Feb 5 at 7:52 $\begingroup$ en.wikipedia.org/wiki/Hidden-variable_theory . Bell's theorem constrains them to be non-local $\endgroup$ – anna v Feb 5 at 7:54
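As an illustrative addendum to the discussion of Bell's theorem above, the CHSH form of the inequality can be checked numerically: for the spin singlet and the standard measurement angles, the magnitude of the quantum CHSH combination reaches $2\sqrt{2}$, above the bound of 2 obeyed by any local hidden-variable model. This is a sketch only; the angles and state are the textbook choices, not anything specific to the posts above.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin(theta):
    """Spin observable along an axis in the x-z plane at angle theta."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state (|01> - |10>)/sqrt(2) in the basis |00>, |01>, |10>, |11>.
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <psi| A(a) (x) B(b) |psi>; equals -cos(a - b) for the singlet."""
    op = np.kron(spin(a), spin(b))
    return np.real(psi.conj() @ op @ psi)

a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))  # ~2.828 = 2*sqrt(2), exceeding the local hidden-variable bound of 2
```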
September 2015, 35(9): 4345-4366. doi: 10.3934/dcds.2015.35.4345 Integral representations for bracket-generating multi-flows Ermal Feleqi and Franco Rampazzo, Dipartimento di Matematica, Università degli Studi di Padova, Via Trieste 63 - 35121 - Padova (PD), Italy Received May 2014 Revised September 2014 Published April 2015 If $f_1,f_2$ are smooth vector fields on an open subset of a Euclidean space and $[f_1,f_2]$ is their Lie bracket, the asymptotic formula \begin{equation}\tag{1} \Psi_{[f_1,f_2]}(t_1,t_2)(x) - x = t_1t_2\,[f_1,f_2](x) + o(t_1t_2) \end{equation} where we have set $\Psi_{[f_1,f_2]}(t_1,t_2)(x) \overset{\underset{\mathrm{def}}{}}{=} \exp(-t_2 f_2)\circ \exp(-t_1f_1) \circ \exp(t_2f_2) \circ \exp(t_1f_1)(x)$, is valid for all $t_1,t_2$ small enough. In fact, the integral, exact formula \begin{equation}\tag{2} \Psi_{[f_1,f_2]}(t_1,t_2)(x) - x = \int_0^{t_1}\int_0^{t_2}[f_1,f_2]^{(s_2,s_1)} (\Psi(t_1,s_2)(x))\,ds_1\,ds_2 \end{equation} where $[f_1,f_2]^{(s_2,s_1)}(y) \overset{\underset{\mathrm{def}}{}}{=} D \big(\exp(s_1f_1) \circ \exp(s_2f_2)\big)^{-1}(y) \cdot [f_1,f_2]\big(\exp (s_1f_1) \circ \exp(s_2f_2)(y) \big)$, has also been proven. Of course (2) can be regarded as an improvement of (1). In this paper we show that an integral representation like (2) holds true for any iterated Lie bracket made of elements of a family $\{f_1,\dots,f_m\}$ of vector fields. In perspective, these integral representations might lie at the basis for extensions of asymptotic formulas involving non-smooth vector fields. Keywords: asymptotic formulas, Chow's theorem, integral formulas, multi-flows, iterated Lie brackets, low smoothness hypotheses. Mathematics Subject Classification: Primary: 34A26, 34H05; Secondary: 93B0. Citation: Ermal Feleqi, Franco Rampazzo. Integral representations for bracket-generating multi-flows. Discrete & Continuous Dynamical Systems - A, 2015, 35 (9) : 4345-4366. doi: 10.3934/dcds.2015.35.4345
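A quick numerical check of formula (1) in the simplest setting of linear vector fields, where the flows are matrix exponentials and the Lie bracket is $[f_1,f_2](x)=(BA-AB)x$. The matrices, the point $x$ and the step sizes below are arbitrary illustrations, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

# Linear vector fields f1(x) = A x and f2(x) = B x with non-commuting A, B.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
x = np.array([1.0, 2.0])
t1, t2 = 1e-3, 2e-3

# Psi_{[f1,f2]}(t1,t2)(x) = exp(-t2 f2) o exp(-t1 f1) o exp(t2 f2) o exp(t1 f1)(x)
Psi_x = expm(-t2 * B) @ expm(-t1 * A) @ expm(t2 * B) @ expm(t1 * A) @ x

lhs = Psi_x - x
rhs = t1 * t2 * (B @ A - A @ B) @ x            # t1*t2*[f1,f2](x) for linear fields
print(lhs, rhs, np.linalg.norm(lhs - rhs))     # the discrepancy is of higher order than t1*t2
```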
Differences in the respiratory response to temperature and hypoxia across four life-stages of the intertidal porcelain crab Petrolisthes laevigatus Félix P. Leiva1,2 (ORCID: orcid.org/0000-0003-0249-9274), Cristóbal Garcés1, Wilco C. E. P. Verberk2, Macarena Care1, Kurt Paschke3,4 & Paulina Gebauer1 Marine Biology volume 165, Article number: 146 (2018) For aquatic breathers, hypoxia and warming can act synergistically, causing a mismatch between oxygen supply (reduced by hypoxia) and oxygen demand (increased by warming). The vulnerability of such species to these interactive effects may differ during ontogeny due to differing gas exchange systems. This study examines respiratory responses to temperature and hypoxia across four life-stages of the intertidal porcelain crab Petrolisthes laevigatus. Eggs, megalopae, juveniles and adults were exposed to combinations of temperatures from 6 to 18 °C and oxygen tensions from 2 to 21 kPa. Metabolic rates differed strongly across life-stages, which could be partly attributed to differences in body mass. However, eggs exhibited significantly lower metabolic rates than predicted for their body mass. For the other three stages, metabolic rates scaled with a mass exponent of 0.89. Mass scaling exponents were similar across all temperatures, but were significantly influenced by oxygen tension (the highest at 9 and 14 kPa, and the lowest at 2 kPa). Respiratory responses across gradients of oxygen tension were used to calculate the response to hypoxia, whereby eggs, megalopae and juveniles responded as oxyconformers and adults as oxyregulators. The thermal sensitivity of the metabolic rates (Q10) was dependent on the oxygen tension in megalopae, and also on the interaction between oxygen tension and temperature intervals in adults. Our results thus provide evidence on how the oxygen tension can modulate the mass dependence of metabolic rates and demonstrate changes in respiratory control from eggs to adults. In light of our results indicating that adults show a good capacity for maintaining metabolism independent of oxygen tension, our study highlights the importance of assessing responses to multiple stressors across different life-stages to determine how vulnerability to warming and hypoxia changes during development. Water temperature notably affects the balance between oxygen supply and demand in aquatic ectotherms (Verberk et al. 2011). Hence, an oxygen perspective may be useful to explain thermal responses in metabolism, body size and differences in species richness across thermal clines as well as the vulnerability of ectotherms to global warming (Van Dijk et al. 1999; Verberk et al. 2011; Verberk and Bilton 2013; Horne et al. 2015). Thermal effects are largely inescapable for aquatic ectotherms, because the thermal conductivity of water is high and physiological processes at all levels of biological organization are impacted by temperature (Hochachka and Somero 2002; Tattersall et al. 2012). Temperature also strongly affects fitness traits (growth, locomotion, reproduction). The interaction of temperature with biological traits such as body mass or environmental stressors such as hypoxia may occur synergistically, limiting the performance of organisms and narrowing their window of thermal tolerance (Frederich and Pörtner 2000; Woods et al. 2009; Moran et al. 2010; Eliason et al. 2011; Verberk and Bilton 2013; Verberk et al. 2016b).
Animals can be classified as oxyregulators or oxyconformers depending on their respiratory response to hypoxia (Prosser 1955). Oxyregulators are able to maintain their oxygen consumption rates independently of ambient oxygen levels down to the so-called critical oxygen tension (pcrit). In contrast, the oxygen consumption of oxyconformers is largely dependent on ambient oxygen levels. Although the establishment of both categories has been subjected to extensive debate (van Winkle and Mangum 1975; Herreid 1980; Pörtner and Grieshaber 1993; Marshall et al. 2013) and the distinction is rarely absolute, a third and less explored response has been suggested. 'Hypoxia sensitive' describes the ability of certain organisms to rapidly decrease their metabolic rate upon slight decreases of oxygen tension (see Fig. 1). It is imperative to apply a quantitative method that covers these different responses (oxyregulators, oxyconformers and hypoxia sensitive) to provide a flexible representation of the inherent causes of variation in metabolic rates (Alexander and McMahon 2004; Mueller and Seymour 2011).
Fig. 1 Schematic representation of hypothetical regulatory capacities across oxygen saturations and their associated oxygen regulation values (RVs, %). RVs are calculated using the area under each curve. An RV of 50% represents oxyconformers (solid line) and RVs above and below 50% denote oxyregulators (segmented lines) and hypoxia-sensitive individuals (dotted line), respectively. Modified from Alexander and McMahon (2004)
Studies on multiple stressors have shown that the early stages of marine invertebrates can be particularly susceptible to the effects of temperature and pH, with larvae being more sensitive than embryos (Przeslawski et al. 2015). Given that life-stages may differ in their vulnerability to multiple stressors, comparing changes in the physiological responses across different life-stages can help improve our understanding of the vulnerability of species to environmental challenges (Kroeker et al. 2013). Although previous studies have evaluated the effects of environmental stressors on the physiological characteristics of invertebrates, they are mostly focused on adult stages and frequently consider temperature as the main stressor. Here, we investigate how the interactive effect of temperature and oxygen tension can trigger different respiratory response patterns for a species of crustacean at different life-stages and with different modes of gas exchange. Decapod crustaceans are a group of invertebrates that mostly live in aquatic environments. They have complex life cycles, with contrasting physiological characteristics (e.g. related to oxygen uptake through either diffusion or convection) that differ not only across species, but also within species across different stages of development (Walther et al. 2009; Anger 2001; Storch et al. 2011; Jensen et al. 2013; Alter et al. 2015; Fitzgibbon et al. 2015). For eggs and probably larvae, oxygen required for metabolism is obtained primarily via diffusion. In contrast, gas exchange both in juveniles and adults occurs through convective processes, taking place primarily in the gills (Whiteley and Taylor 2015). Moreover, haemocyanin plays a more important role in carrying oxygen through an open circulatory system to the different tissues during these life-stages (Terwilliger 1998), especially at high temperatures (Giomi and Pörtner 2013).
Given these morphological and functional differences across decapod life-stages, we hypothesize that responses at the whole-organism level to temperature and oxygen challenges should differ, resulting in a poor capacity to regulate metabolic rate in early life-stages (eggs and larvae) and better regulatory capacity in subsequent life-stages (juveniles and adults). To test this hypothesis, we measured metabolic rates in the benthic life-stages (eggs, megalopae, juveniles and adults) of the intertidal crab Petrolisthes laevigatus at three different temperatures and five different oxygen tensions in a fully factorial design (15 treatments per life-stage). This approach allowed us to estimate the effects of body mass on metabolic rate, compare the degree of respiratory control across life-stages and at different temperatures, and determine thermal sensitivity (Q10) across life-stages and at different oxygen tensions. Animal collection and maintenance Benthic life-stages of Petrolisthes laevigatus were collected in the intertidal zone (Pelluhuin) near Puerto Montt, Chile, between October 2009 (beginning of spring) and March 2010 (end of summer). Sea surface temperatures during the sampling period ranged from 12 °C (October 2009) to 15 °C (March 2010). Ovigerous females, newly settled megalopae, juveniles (carapace length, CL: 2–4 mm) and adults were transported to the Laboratory of Crustacean Ecophysiology (LECOFIC) at the Universidad Austral de Chile. Adults and juveniles were held in 16-L aquaria, megalopae in 0.8-L aquaria in a constant temperature room at 12 ± 1 °C, under a 12 h:12 h light/dark photoperiod without food. Both aquaria were supplied by an open-flow system of continuous filtered seawater (salinity 32, 12 °C). Only animals at the intermolt stage were used in the experiment. Eggs were obtained from ovigerous females using tweezers after 1 day in the laboratory. Eggs in the intermediate stage (between 25 and 50% of the yolk consumed with a barely visible ocular spot: sensu Lardies et al. 2004; Gebauer et al. 2007) from different females (N = 25, carapace length; CL: 12–13 mm) were used and pooled. Experimental setup All life-stages were exposed to one of three temperature treatments (6, 12 and 18 °C) for 24 h, under normoxic conditions (21 kPa), and absence of food. These three temperatures fall within the range of spring and summer temperatures at the study location (4–18 °C; Gebauer et al. 2007). We used a thermostatized bath to increase the seawater temperature to 18 °C and a fridge connected to a thermostat (Danfoss EKC102A) to decrease it to 6 °C. For the intermediate acclimation (12 °C), incubations were set up inside the same temperature- and light-controlled room used for the aforementioned maintenance conditions. After 24 h of normoxia exposure with the corresponding temperatures, each life-stage was exposed to each of the five nominal oxygen tensions: 2.3, 4.7, 9.4, 14.1 and 21.2 kPa (referred hereafter as 2, 5, 9, 14 and 21 kPa). The different oxygen tensions were attained by bubbling nitrogen gas (N2) through the seawater in a 200-L capacity reservoir tank followed by a 10 min equilibration period prior to use. All experiments were conducted inside a temperature- and light-controlled room during daytime to prevent diurnal cycles influencing measurements of metabolic rate. A 12 h:12 h light/dark photoperiod and UV-sterilized and filtered (1 μm) seawater were applied during incubations. 
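The fully factorial layout described above can be written down explicitly; the following trivial sketch only enumerates the treatment cells and replicate counts given in the text (the bookkeeping is mine, not code from the study), and recovers the roughly 600 animal incubations mentioned later in the Discussion.

```python
from itertools import product

life_stages = ["egg", "megalopa", "juvenile", "adult"]
temperatures_C = [6, 12, 18]
oxygen_kPa = [2.3, 4.7, 9.4, 14.1, 21.2]        # nominal oxygen tensions
replicates, controls = 10, 3                     # per temperature x oxygen combination

cells = list(product(life_stages, temperatures_C, oxygen_kPa))
print(len(cells))                                # 60 treatment cells (15 per life-stage)
print(len(cells) * replicates)                   # 600 animal incubations
print(len(cells) * controls)                     # 180 background (control) incubations
```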
Oxygen consumption rates Closed respirometry was used to determine oxygen consumption rates (MR) of eggs, megalopae, juveniles and adults for every temperature/oxygen tension combination (3 × 5 = 15 combinations). Measurements of oxygen tension were made using a needle-type oxygen optic fibre connected to a Microx TX3 AOT (PreSens, Germany), which was calibrated prior to the experiment using a two-point calibration in water (0 and 100% air saturation). Oxygen concentration was measured before and after an incubation period of 3 h for adults, juveniles and megalopae and of 5 h for eggs. Oxygen content never decreased below 80% of initial values following these incubation periods, to prevent potential influences of accumulating metabolites and overlap between the different oxygen tension treatments. Given the differences in the volumes of each developmental stage, we incubated different numbers of animals in different volumes. For adults and juveniles, we allocated one individual per 1- and 0.25-L chamber, respectively. For megalopae, five individuals were incubated per 10-mL plastic disposable syringe, while 70 eggs were incubated per 6-mL plastic syringe. We used ten replicates per life-stage for each combination of temperature and oxygen, and an additional three controls per combination without individuals to estimate and correct for potential bacterial respiration (background respiration). On average, background respiration was never more than 5% of measured respiration rates. To determine dry weights, samples were lyophilized (Savant Novalyphe NL150) for a minimum of 48 h and then weighed (Precisa 290 SCS, ± 0.01 mg). Dry mass (DM) ranged from 5.18 to 7.63 mg for pooled eggs (N = 70), from 0.49 to 0.91 mg for individual megalopae, from 18.65 to 217.12 mg for individual juveniles and from 545.00 to 1571.10 mg for individual adults. Calculation and data analyses Our data analyses were based on variants of linear models. A preliminary analysis indicated that mass-specific metabolic rate varied significantly with life-stage, temperature and oxygen tension, as well as with all the interactions between two or three of these factors (Table S1, Supplementary Information). However, as stage and body size are highly correlated, this model did not account for potential differences in mass-specific metabolic rate, so we performed additional analyses to determine the effect of body mass (DM, g), oxygen tension (kPa) and temperature (°C) on the metabolic rate of P. laevigatus. Metabolic rate (MO2, µmol O2 h−1 ind−1) and body mass (DM, g) data were firstly log-transformed (base 10) and fitted to a series of models. The most informative model was selected using the lowest Akaike's information criterion (AIC) (Table S2, Supplementary Information). As these models indicated that the temperature × body mass interaction was non-significant (ANOVA, F(1,584) = 0.09, P = 0.761, N = 588, Table S2), we decided to predict metabolic scaling relationships at different oxygen tensions while setting temperature at an average of 12 °C (Table S2, Supplementary Information). Thus, mass-scaling relationships for each oxygen tension level (2, 5, 9, 14 and 21 kPa) were fitted using the power function $Y = aM^b$, which, after log-transformation, becomes the linear relationship $\log_{10} Y = \log_{10} a + b \log_{10} M$; here $Y$ is the metabolic rate, $a$ is the normalization constant (intercept on the log scale), $M$ is the body mass of each life-stage and $b$ is the scaling exponent (slope) (Kleiber 1932; West et al. 1997).
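A minimal sketch of such a mass-scaling fit on log-transformed data, recovering the exponent $b$ and the constant $a$ by ordinary least squares; the mass and metabolic rate values below are invented for illustration and are not the study's measurements.

```python
import numpy as np

# Hypothetical (dry mass, metabolic rate) pairs in g and µmol O2 h^-1 ind^-1.
dm = np.array([0.0007, 0.05, 0.2, 0.8, 1.5])
mo2 = np.array([0.004, 0.18, 0.6, 2.1, 3.6])

log_dm, log_mo2 = np.log10(dm), np.log10(mo2)
b, log_a = np.polyfit(log_dm, log_mo2, 1)     # slope = scaling exponent, intercept = log10(a)
print(f"exponent b = {b:.2f}, constant a = {10**log_a:.3f}")
# Fitted power law: MO2 = a * DM**b
```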
Oxygen regulation values (RV, %) were estimated according to Alexander and McMahon (2004), who used the zebra mussel Dreissena polymorpha as a model species. We calculated this respiratory index for each life-stage and experimental temperature with modifications adopted by Leiva et al. (2015). Regardless of oxygen tension, we assigned the highest oxygen consumption rate the value of 100% and expressed the oxygen consumption rates at the other oxygen tensions as a percentage of this highest value. Therefore, we obtained five different data points, one for each oxygen tension, and these oxygen tensions were transformed to a percentage of oxygen saturation. A third-order polynomial model (chosen on the basis of $R^2$) was fitted to these five points and the area under the curve was calculated by integrating this equation between 0 and 100% of oxygen saturation. The value thus obtained reflects the regulatory capacity of an animal along an oxygen gradient (see Fig. 1). Thus, an oxyconformer will exhibit a value of 50% or close to this, while values above 50% indicate oxyregulatory capacity (becoming maximal at 100%). Values below 50% indicate that animals are sensitive to hypoxia (Alexander and McMahon 2004). For each life-stage and oxygen tension, the thermal sensitivity was determined using the van 't Hoff equation (Q10) as follows: $$ Q_{10} = \left( \frac{\text{MR}_2}{\text{MR}_1} \right)^{\frac{10}{T_2 - T_1}}, $$ where $\text{MR}_1$ and $\text{MR}_2$ are metabolic rates at temperatures $T_1$ and $T_2$ (with $T_1 < T_2$). Our three acclimation temperatures gave three temperature intervals, resulting in 60 Q10 values. We assessed the effects of temperature and life-stage on the regulation values (RVs) using analysis of variance applied to linear models. This was followed by a Tukey pairwise comparison. In addition, t tests were used to assess whether life-stages are oxyregulators, oxyconformers or hypoxia sensitive (i.e. by comparing the mean of their RV against the threshold value of 50%). For these analyses, temperature was included as a categorical variable in our model (see Table S3, Supplementary Information). Similarly, we also applied analysis of variance to assess the effects of oxygen tension, life-stage and temperature intervals (∆Temp) on the Q10 values. Univariate normality assumptions were evaluated graphically by comparing the theoretical and observed distributions of residuals using Q–Q plots (Venables and Ripley 2002) and by applying the Shapiro–Wilk test. Homoscedasticity assumptions were evaluated with Levene's test (Levene 1960) applying a significance level of 0.05. Residuals of the models were Box–Cox transformed to correct for heteroscedasticity for the Q10 analyses only (Box and Cox 1964). All analyses and the drafting of figures were carried out using R Statistical Software (R Core Team 2012). Log-transformed metabolic rates were strongly related to the log-transformed body mass of Petrolisthes laevigatus, scaling positively with an overall exponent of 1.05 ± 0.02, i.e. near isometric scaling (Fig. 2a). However, eggs demonstrated lower metabolic rates than expected for their body size. The model fit was greatly improved by accounting for this difference between eggs and other life-stages (i.e. by including a binary variable differentiating between eggs and non-eggs). This decreased the Akaike's information criterion (AIC) value by 506.15 points. Metabolic rate scaled with body mass allometrically (0.89 ± 0.01) for the remaining three life-stages (Fig. 2a).
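A compact numerical sketch of the two indices defined in the Methods above (the regulation value following Alexander and McMahon (2004) and the van 't Hoff Q10), using invented example numbers rather than measured data; only the general procedure (cubic fit, integration from 0 to 100% saturation, ratio of rates over a 10 °C interval) follows the text.

```python
import numpy as np

def regulation_value(sat_percent, mo2):
    """RV (%) from oxygen consumption measured at several air saturations."""
    y = 100.0 * np.asarray(mo2) / np.max(mo2)            # consumption as % of the maximum
    coeffs = np.polyfit(sat_percent, y, 3)               # third-order polynomial fit
    antideriv = np.polyint(coeffs)
    area = np.polyval(antideriv, 100.0) - np.polyval(antideriv, 0.0)
    return area / 100.0                                  # 50% = oxyconformer, >50% = oxyregulator

def q10(mr1, mr2, t1, t2):
    """Thermal sensitivity between temperatures t1 < t2 (van 't Hoff equation)."""
    return (mr2 / mr1) ** (10.0 / (t2 - t1))

saturation = np.array([10.8, 22.2, 44.3, 66.5, 100.0])   # ~2, 5, 9, 14, 21 kPa as % air saturation
mo2_conformer = saturation * 0.01                         # consumption tracks saturation linearly
mo2_regulator = np.array([40.0, 80.0, 95.0, 100.0, 100.0])
print(regulation_value(saturation, mo2_conformer))        # ~50 (oxyconformer)
print(regulation_value(saturation, mo2_regulator))        # well above 50 (oxyregulator)
print(q10(mr1=1.0, mr2=2.2, t1=6.0, t2=18.0))             # ~1.9
```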
Mass exponents also varied with oxygen tension (Log DM × oxygen tension ANOVA, F(4,576) = 9.22, P = 3.122e−07), reaching the lowest point (0.83 ± 0.02) at 2 kPa, and the highest point (0.95 ± 0.02) at 9 and 14 kPa (Fig. 2b and Table S2). Intermediate values of the mass exponent were found at 5 and 21 kPa, being on average ca. 0.88 (Fig. 2b).
Fig. 2 Mass scaling of the metabolic rate in different life-stages of Petrolisthes laevigatus. a Lines represent the model fitted to data from all life-stages (solid grey line) and all life-stages except eggs (solid black line). b Lines represent model fits for each oxygen tension excluding eggs. All fitted lines represent the average of the three temperatures (12 °C). No significant interaction was found between temperature and body mass.
Regulation values (RVs, %) differed across P. laevigatus life-stages (ANOVA, F(3,6) = 7.90, P = 0.0166), but were not influenced by temperature (ANOVA, F(2,6) = 1.01, P = 0.4159) (Fig. 3 and Table 1). Our linear model indicated that the mean RV for eggs (53.61 ± 6.01%) was not significantly different from the mean RV for megalopae (44.89 ± 2.99%) or juveniles (44.46 ± 11.65%) (Tukey test, P > 0.05, Fig. 3). The average RV for the egg–megalopae–juveniles group was 47.65 ± 5.16% (not significantly different from 50%) and these life-stages were classified as oxyconformers (Fig. 3). In contrast, adults had a consistently higher RV (69.59 ± 3.69%) and were categorized as oxyregulators.
Fig. 3 Oxygen regulation values (RVs, %) for each experimental temperature across all life-stages of Petrolisthes laevigatus. Oxyconformity is represented on each graph by a horizontal segmented line indicating an RV of 50%. Index values above or below this line represent oxyregulator or hypoxia-sensitive individuals, respectively.
Table 1 Outcome of linear model using type II sums of squares showing the effects of temperature and life-stage on regulation values (RV) and oxygen tension, life-stage, and temperature intervals (∆Temp) on Q10 values of Petrolisthes laevigatus.
Thermal responses in oxygen consumption rates, measured as Q10 values, were affected by the interaction between oxygen tension, life-stage and ∆Temp (three-way ANOVA, F(6,35) = 6.53, P = 0.0001) (Fig. 4, Table 1 and Table S3 Supplementary Information). Simplified models for each life-stage showed that megalopae are affected by oxygen tension (two-way ANOVA, F(1,9) = 8.73, P = 0.0160) (Table 2) and that adults are affected by the oxygen tension × ∆Temp interaction (two-way ANOVA, F(2,8) = 13.36, P = 0.0014) (Table 2). These results indicate that oxygen tension influences the ability of megalopae and adults to increase or decrease oxygen consumption in response to changes in environmental temperature.
Fig. 4 Thermal sensitivity of metabolic rate (expressed as a Q10 value) as a function of the oxygen tension (kPa) for different life-stages of Petrolisthes laevigatus. Black dots: 18 and 6 °C; grey dots: 18 and 12 °C; white dots: 12 and 6 °C. Note that Q10 values measured at 21 kPa overlap for juveniles.
Table 2 Outcome of linear model using type II sums of squares showing the effects of oxygen tension and temperature intervals (∆Temp) on Q10 values for each Petrolisthes laevigatus life-stage.
Our approach provides metabolic rate estimations (ca. 600 measurements) for the intertidal crab P. laevigatus exposed to different combinations of oxygen tension and temperature.
These estimates allowed us to infer how biological (body mass and life-stages) and environmental (oxygen and temperature) modulators may affect the metabolism of this crustacean species. We found that metabolic rate scaled allometrically with body mass in P. laevigatus across postembryonic life-stages. However, since mass-specific metabolic rates in eggs were lower than other life-stages, a lower mass exponent was calculated when only megalopae, juveniles and adults were considered (see Fig. 2a). The low metabolic rates observed in eggs are probably the result of yolk reserves that have been formed during the late stages of oogenesis (Nagaraju 2011). The yolk is metabolically inert, yet affects the body mass values and concomitantly results in lower mass-specific metabolic rates (Petersen and Anger 1997; Anger 2001). The oxygen demand of eggs increases as they develop and convert yolk into metabolically active tissue. At the same time, the gas exchange area remains relatively constant and the egg membrane acts as a diffusion barrier which can lead to a mismatch between oxygen supply and demand. The consequences of such oxygen limitation are hatching delays (Fernández et al. 2003) and subsequent catch-up growth even when favourable conditions for larval life are reinstated (Petersen and Anger 1997; Warkentin 2002; Horváthová et al. 2017). Several studies have shown changes in mass exponents through the ontogeny of different taxa which is in agreement with our findings (e.g. Killen et al. 2007; Frappell 2008). This includes crustacean studies. For example, in a study of the eastern lobster Sagmariasus verreauxi, the mass exponent changed from 0.97 in the planktonic phyllosoma stage to 0.83 in juveniles (Jensen et al. 2013). Glazier (2006) suggested that such transitions in the scaling from isometry to allometry are associated with an ontogenetic change in the surface area to volume ratio of respiratory organs. Such changes occur frequently among marine invertebrates with complex life cycles where the different life-stages exhibit large contrasts in morphology and physiology. In P. laevigatus, changes to the respiratory system occur during metamorphosis, where functional gills appearing in the juvenile stage become fully developed in adult life-stages. Although the presence of gills in zoea larvae has been suggested for other anomuran species such as Lithodes santolla, these do not apparently play a role in gas exchange (Paschke et al. 2010). While studies that aim to evaluate the effects of environmental stressors (like temperature and oxygen tension) on the metabolic rate of crustaceans are common (e.g. Grieshaber et al. 1993; Burnett and Stickle 2001; Paschke et al. 2010; Leiva et al. 2015, 2016), only a few studies have evaluated whether these effects are differentially expressed for large and small bodied species (i.e. whether environmental variables modify the mass exponent). Studies on this topic have shown that temperature (Glazier 2005; Killen et al. 2010; Verberk and Atkinson 2013; Carey and Sigwart 2014) and oxygen tension (Urbina and Glover 2013) influence the mass exponent. Our study is one of the first that explores the effects of oxygen tension on the mass scaling of a crustacean species. In our study, we clearly demonstrate that oxygen tension alters the metabolic scaling in P. laevigatus; scaling exponents increase with increasing oxygen tension up to 9–14 kPa before declining again. 
Interestingly, Urbina and Glover (2013) showed that scaling exponents peaked at intermediate oxygen tensions in inanga Galaxias maculatus in a similar way. Although our limited data set does not allow for detailed inferences about the mechanistic basis of the observed response, it does show that physiological responses to low oxygen exposure co-vary with size and life-stage. Ontogeny-related processes such as the regulation of metabolic rates (Spicer and El-Gamal 1999), functional changes in subunits of oxygen transport proteins (Terwilliger and Brown 1993; Brown and Terwilliger 1999) and the development of the cardiovascular system (Harper and Reiber 2006; Rudin-Bitterli et al. 2016) could contribute to the variation in mass exponent in relation to oxygen tension described here. In our study, the largest life-stage (i.e. adults) had the strongest oxyregulatory capacity, and such physiological differences across P. laevigatus life-stages could explain the effect of oxygen tension on metabolic scaling: if, under mild hypoxia, the oxygen consumption rates of the oxyconforming life-stages (i.e. the eggs, megalopae and juveniles) decrease, while the oxyregulatory adults are able to maintain oxygen consumption rates, this will result in a steeper scaling relationship at mild hypoxia, but not at normoxia or severe hypoxia (when the oxygen consumption rates of adults also decline). Interestingly, and in contrast to other studies (e.g. Killen et al. 2010; Carey and Sigwart 2014), no effect of temperature on metabolic scaling was evident. All life-stages evaluated in this study spend most of their life span in the intertidal zone, usually on rocky shores. They are normally all exposed to thermal fluctuations in their habitats, which explains why temperature effects on oxyregulatory capacity were similar at different life-stages. We predicted that variations in oxyregulatory capacity between life-stages also reflect the oxygen conditions found in their habitats. As expected, eggs' metabolic rates decreased linearly with oxygen tension, probably as a result of their limited gas exchange system. This is perhaps not surprising, as the ventilation of eggs (and hence oxygen supply) is enhanced by the behaviour of ovigerous females, a form of parental care which reduces the need for active gas exchange. For example, an increase in oxygen supply as a result of abdominal flapping to eggs exposed to hypoxia of 2 kPa has been described for the hairy edible crab Pseudograpsus setosus (formerly Cancer setosus) (Fernández and Brante 2003). Respiration patterns observed in this study are different from those recently described for the same species using the traditional (pcrit) estimation (Alter et al. 2015). These authors found that eggs and juveniles were capable of maintaining their metabolism regardless of oxygen tension down to 15 and 5 kPa, respectively. For comparative purposes, we also estimated the pcrit according to Mueller and Seymour (2011) for all life-stages in our study. However, the absence of inflection points in the oxygen consumption of eggs, megalopae and juveniles prevented us from obtaining a reliable value for this estimator, restricting the results to the adult group (see Fig. S1, Supplementary Information). Differences in the origin of experimental animals may provide explanations for the inconsistencies between studies. To obtain eggs, Alter et al.
(2015) reared their experimental ovigerous females in the laboratory, while megalopae were caught in the field and then reared until they metamorphosed to juveniles. In contrast, we obtained all life-stages from the field and maintained them for a short time in the laboratory. This suggests that environmental history may be important in shaping physiological performance to short-term exposures (Castillo and Helmuth 2005; Leiva et al. 2016). Future experiments should account for this initial variability. Moreover, future studies should determine whether later larval life-stages are more sensitive due to their larger mass and diffusion distances or because of their rudimentary cardiorespiratory anatomy, as suggested for other crustacean species (Fitzgibbon et al. 2015). According to the oxygen and capacity-limited thermal tolerance (OCLTT) hypothesis (Pörtner 2001), a higher thermal sensitivity of oxygen demand should make an animal prone to heat stress because higher oxygen requirements imposed by warming cannot always be matched by a concomitant increase in oxygen supply. Conversely, a higher oxygen uptake capacity should make an animal less susceptible to heat stress. Indeed, some studies have found links between heat tolerance and both (1) thermal sensitivity of oxygen consumption rates (Verberk and Bilton 2011) and (2) differences in capacity for regulating oxygen uptake (Verberk and Bilton 2013). However, it is worth noting that high Q10 values can be interpreted both as having a high thermal sensitivity of oxygen demand or as having a high capacity for oxygen uptake (Verberk et al. 2016a). According to our results, Q10 values were dependent on the interaction between life-stage, oxygen tension and ΔTemp. The highest Q10 values were observed at higher oxygen tensions, especially in megalopae and adults. These values were statistically different from those observed during hypoxic conditions, suggesting that the newly settled megalopae used in our study show sensitivity to unstable environmental conditions similar to those present in the intertidal zone. Despite this, a high Q10 value was observed at 21 kPa for adults (ca. 20.37, Fig. 4) as a result of low metabolic rates observed at 6 °C (see Figs. S1 and S4, Supplementary Information). It remains unclear why these low metabolic rates occur. In summary, our study demonstrates that environmental oxygen tension can affect the body mass scaling of metabolic rates in P. laevigatus and provides a good estimation of how respiratory capacity is depressed by oxygen supply. Such patterns demonstrate that different life-stages exhibit differences in oxyregulatory capacity. P. laevigatus adults represented the only life-stage that showed good capacity to maintain metabolism independent of oxygen tension. Other life-stages (eggs, megalopae and juveniles) were oxyconformers. These responses may reflect the environmental history of conditions experienced by these life-stages. Finally, our study adds evidence to the increasingly active debate on how different life-stages exhibit distinct responses to the effects of warming and hypoxia. Alexander JE, McMahon RF (2004) Respiratory response to temperature and hypoxia in the zebra mussel Dreissena polymorpha. Comp Biochem Physiol A Mol Integr Physiol 137:425–434 Alter K, Paschke K, Gebauer P, Cumillaf JP, Pörtner HO (2015) Differential physiological responses to oxygen availability in early life-stages of decapods developing in distinct environments. 
Mar Biol 162:1111–1124 Anger K (2001) The biology of decapod crustacean larvae. Balkema, Amsterdam Box GEP, Cox DR (1964) An analysis of transformations. J R Stat Soc Ser B Stat Methodol 26:211–252 Brown AC, Terwilliger NB (1999) Developmental changes in oxygen uptake in Cancer magister (Dana) in response to changes in salinity and temperature. J Exp Mar Biol Ecol 241:179–192 Burnett LE, Stickle WB (2001) Physiological responses to hypoxia. In: Rabalais NN, Turner RE (eds) Coastal hypoxia: consequences for living resources and ecosystems. American Geophysical Union, Washington, DC, pp 101–114 Carey N, Sigwart JD (2014) Size matters: plasticity in metabolic scaling shows body-size may modulate responses to climate change. Biol Lett 10:20140408 Castillo KD, Helmuth BST (2005) Influence of thermal history on the response of Montastraea annularis to short-term temperature exposure. Mar Biol 148:261–270 Eliason EJ, Clark TD, Hague MJ, Hanson LM, Gallagher ZS, Jeffries KM, Gale MK, Patterson DA, Hinch SG, Farrell AP (2011) Differences in thermal tolerance among sockeye salmon populations. Science 332:109–112 Fernández M, Brante A (2003) Brood care in Brachyuran crabs: the effect of oxygen provision on reproductive costs. Rev Chil Hist Nat 76:157–168 Fernández M, Ruiz-Tagle N, Cifuentes S, Pörtner HO, Arntz W (2003) Oxygen-dependent asynchrony of embryonic development in embryo masses of brachyuran crabs. Mar Biol 142:559–565 Fitzgibbon QP, Ruff N, Battaglene SC (2015) Cardiorespiratory ontogeny and response to environmental hypoxia of larval spiny lobster, Sagmariasus verreauxi. Comp Biochem Physiol A Mol Integr Physiol 184:76–82 Frappell PB (2008) Ontogeny and allometry of metabolic rate and ventilation in the marsupial: matching supply and demand from ectothermy to endothermy. Comp Biochem Physiol A Mol Integr Physiol 150:181–188 Frederich M, Pörtner HO (2000) Oxygen limitation of thermal tolerance defined by cardiac and ventilatory performance in spider crab, Maja squinado. Am J Physiol Regul Integr Comp Physiol 279:R1531–R1538 Gebauer P, Paschke K, Moreno CA (2007) Reproductive biology and population parameters of Petrolisthes laevigatus (Anomura: Porcellanidae) in southern Chile: consequences on recruitment. J Mar Biol Assoc UK 87:729–734 Giomi F, Pörtner HO (2013) A role for haemolymph oxygen capacity in heat tolerance of eurythermal crabs. Front Physiol 4:1–12 Glazier DS (2005) Beyond the '3/4-power law': variation in the intra-and interspecific scaling of metabolic rate in animals. Biol Rev 80:611–662 Glazier DS (2006) The 3/4-power law is not universal: evolution of isometric, ontogenetic metabolic scaling in pelagic animals. Bioscience 56:325–332 Grieshaber MK, Hardewig I, Kreutzer U, Pörtner HO (1993) Physiological and metabolic responses to hypoxia in invertebrates. Rev Physiol Biochem Pharmacol 125:43–147 Harper SL, Reiber CL (2006) Metabolic, respiratory and cardiovascular responses to acute and chronic hypoxic exposure in tadpole shrimp Triops longicaudatus. J Exp Biol 209:1639–1650 Herreid CF (1980) Hypoxia in invertebrates. Comp Biochem Physiol A Physiol 67:311–320 Hochachka PW, Somero G (2002) Biochemical adaptation. Mechanism and process in physiological evolution. Oxford University Press, New York Horne CR, Hirst A, Atkinson D (2015) Temperature-size responses match latitudinal-size clines in arthropods, revealing critical differences between aquatic and terrestrial species. 
Predicting global community properties from uncertain estimates of interaction strengths

György Barabás (1), Stefano Allesina (2,*)

(1) Department of Physics, Chemistry and Biology (IFM), Institute of Technology - Linköping University
(2) Department of Ecology & Evolution, Division of Biological Sciences - University of Chicago

The community matrix measures the direct effect of species on each other in an ecological community. It can be used to determine whether a system is stable (returns to equilibrium after small perturbations of the population abundances), reactive (perturbations are initially amplified before damping out), and to determine the response of any individual species to perturbations of environmental parameters. However, several studies show that small errors in estimating the entries of the community matrix translate into large errors in predicting individual species responses. Here we ask if there are properties of complex communities one can still predict using only a crude, order-of-magnitude estimate of the community matrix entries. Using empirical data, randomly generated community matrices, and those generated by the Allometric Trophic Network model, we show that the stability and reactivity properties of systems can be predicted with good accuracy. We also provide theoretical insight into when and why our crude approximations are expected to yield an accurate description of communities. Our results indicate that even rough estimates of interaction strengths can be useful for assessing global properties of large systems.

Ecological communities can be modeled through a set of deterministic differential equations keeping track of population growth as a function of the (biotic and abiotic) environment. One central question in the study of communities is their stability [15]: when perturbing the population abundances slightly, does the community tend to return to its original state? This question naturally follows from the fact that, in nature, populations undergo constant perturbations, which they have to withstand to avoid extinction. Mathematically, a community equilibrium is locally stable if the Jacobian matrix, evaluated at that equilibrium, has only eigenvalues with negative real parts. The Jacobian evaluated at an equilibrium point is called the community matrix [11], whose $(i,j)$th entry measures the change in the total population growth rate of species $i$ in response to a (small) change in species $j$'s abundance, in units of inverse time. This matrix has many useful properties in addition to determining local stability. For instance, a stable equilibrium is reactive [20] if perturbations, before damping, are initially amplified in a transient manner. Reactivity is measured by the leading eigenvalue of the Hermitian part of the community matrix, with positive values signaling reactive systems. In addition, the inverse community matrix can be used to determine the response of any species in the community to perturbations of the environment (i.e., not the abundances) via a community-wide sensitivity formula [12,6,29,18,2,4]. However, studies have revealed [29,30,8,21] that even small uncertainties in estimating the entries of the community matrix translate into large errors of prediction. The problem is that small perturbations to the matrix can have large effects on the inverse matrix, to the point where even the directionality of the species' responses to environmental perturbations is predicted erroneously.
Since the eigenvalues of the inverse matrix are the inverses of the eigenvalues of the original matrix, we do not expect the inverse matrix to be ill-behaved as long as all eigenvalues are far from the origin of the complex plane. But problems arise when some eigenvalues are close to zero: then the slightest error in measurement may lead to qualitatively different outcomes. Suppose the leading eigenvalue of a community matrix is $(-0.01 \pm 0.02) / \text{year}$; its inverse is then either smaller than $-33.3 \; \text{years}$, or larger than $100 \; \text{years}$. A slight measurement error can make the difference in whether the inverse matrix is deemed to have a large negative or a large positive eigenvalue. We do not know for certain whether community matrices of large natural systems possess eigenvalues that are close to zero, but a heuristic argument can be made that this is indeed the case based on the shape of the species-abundance curve [17], which shows that rare species are always overrepresented in natural communities. Since even small perturbations of the abundances could knock such rare species to extinction, the system must lie close to a transcritical bifurcation, meaning that some eigenvalues are necessarily very close to zero. In large systems, it is already logistically impossible to measure every pairwise interaction (a community of $100$ species with connectance $0.1$ has $1000$ links), let alone to do so with high accuracy. This fact, combined with the above argument, seems to imply the futility of relying on the "inverse problem" to obtain species' responses to environmental perturbations. What one can do, however—and this is the focus of our work—is to use imperfect information about the system to estimate properties which do not rely on inverting the community matrix, such as stability and reactivity. By avoiding the matrix inversion, small errors no longer translate into large ones, and so even crude estimates may provide useful information about a system. Here we ask how well one can approximate the eigenvalue distribution of community matrices based on only an order-of-magnitude knowledge of interaction coefficients. Even though accurate measurement of all matrix entries is impossible, ecologists with extensive field experience can frequently rely on their intuition to classify interactions as "strong" or "weak". We assume that the magnitudes of the strongest interactions are known (reasoning that, since they are strong, they are the most likely to be noticed and possibly the easiest to measure), and coarse-grain each matrix entry into bins, based on its relative magnitude in comparison to the strongest interactions. We show that the eigenvalue structure of complex community matrices can be captured well using this procedure, and therefore such qualitative information can be used to approximate system properties not relying on the inverse. Below, after describing how we construct approximate matrices using imperfect data, we show how this procedure works on empirical datasets. We then apply the procedure to matrices that are randomly generated, as well as those generated by the Allometric Trophic Network model [7]. We then go on to give a theoretical justification for our method, building on the theory of random matrices [3] and pseudospectra [26]. We end by discussing the limitations of our approach and its relevance for the classic stability-complexity debate in community ecology.
Constructing approximate matrices

We are given a community matrix $A$, and we would like to know its eigenvalues, but information on $A$'s entries is limited. Quantitatively, we assume we know only the magnitudes of the largest positive and negative entry (denoted by $p$ and $n$, respectively), and the zero entries of $A$, i.e., we know which interactions are absent. Apart from this quantitative information, we assume a qualitative knowledge of all other matrix entries: based on expert opinion or other indirect information, we know whether a given entry is strong or weak compared to $n$ and $p$. Based on this, we assign numerical bins into which the entries of $A$ will be lumped. Let $B$ denote this binned approximation to $A$. We ask how well the spectrum of $B$ approximates that of $A$. We thus need to specify a binning procedure. First, we choose a number of bins. We then assign numerical values to these bins. Finally, each entry of $B$ is set to the value of the bin closest to the value of the corresponding entry in $A$. For a given binning, we use the notation \[ (x_1, \; x_2, \; \ldots, \; x_k), \] meaning that the first bin goes from $x_1$ to halfway between $x_1$ and $x_2$, the second goes from halfway between $x_1$ and $x_2$ to halfway between $x_2$ and $x_3$, and so on, until the last bin going from halfway between $x_{k-1}$ and $x_k$ to $x_k$. As an example, consider the matrix \[ A = \begin{pmatrix} 7.8 & 6.7 & 3.7 & -1.2 \\ -7.5 & 2.6 & -7.4 & 0 \\ -10.0 & -6.9 & 0.4 & 5.8 \\ 0 & 0 & 10.0 & -8.7 \end{pmatrix} . \] Its strongest negative entry is $n = -10$; its strongest positive entry is $p = 10$. If we now decide to construct $B$ using three bins with values $(n, \; 0, \; p)$, we get \[ A = \begin{pmatrix} 7.8 & 6.7 & 3.7 & -1.2 \\ -7.5 & 2.6 & -7.4 & 0 \\ -10.0 & -6.9 & 0.4 & 5.8 \\ 0 & 0 & 10.0 & -8.7 \end{pmatrix} \quad \Rightarrow \quad B = \begin{pmatrix} 10 & 10 & 0 & 0 \\ -10 & 0 & -10 & 0 \\ -10 & -10 & 0 & 10 \\ 0 & 0 & 10 & -10 \end{pmatrix} . \] In principle, the choice for the number of bins and their values is arbitrary. Here we consider the following, more specific procedure. Let the number of bins $k \ge 3$ be an odd integer. Let us specify a binning resolution constant $b$ whose powers help define the bins; in effect, $b$ fixes the definition of an order of magnitude. The $k$ bins are then given by \[ (n, \; nb^{-1}, \; nb^{-2}, \; \ldots, \; nb^{-(k-3)/2}, \; 0, \; pb^{-(k-3)/2}, \; \ldots, \; pb^{-2}, \; pb^{-1}, \; p) . \] Using our previous example for $A$, we can bin $A$ with $k = 5$ and binning resolution $b = 10$. Since $n = -10$ and $p = 10$, the bins are $(-10, \; -1, \; 0, \; 1, \; 10)$: \[ A = \begin{pmatrix} 7.8 & 6.7 & 3.7 & -1.2 \\ -7.5 & 2.6 & -7.4 & 0 \\ -10.0 & -6.9 & 0.4 & 5.8 \\ 0 & 0 & 10.0 & -8.7 \end{pmatrix} \quad \Rightarrow \quad B = \begin{pmatrix} 10 & 10 & 1 & -1 \\ -10 & 1 & -10 & 0 \\ -10 & -10 & 0 & 10 \\ 0 & 0 & 10 & -10 \end{pmatrix} . \] Our scheme for binning matrix entries involves exponentially shrinking bin sizes. Any number of other schemes may be implemented—e.g., linear binning, where adjacent bins are equally spaced. The reason for choosing the exponential scheme is that there is both theoretical [24] and empirical [28] evidence that the interaction strengths in large ecological networks follow a distribution close to lognormal. Therefore the exponential binning strategy is expected to yield better resolution of the underlying data than a linear one.
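The nearest-bin rule above is straightforward to implement. Below is a minimal NumPy sketch (the function names are ours, and we read $n$ and $p$ directly off the matrix, which in an empirical setting would instead be the assumed known strongest interactions); it reproduces the $k = 5$, $b = 10$ example:

```python
import numpy as np

def exponential_bins(n, p, b, k):
    """Bin values (n, n/b, ..., n/b^((k-3)/2), 0, p/b^((k-3)/2), ..., p/b, p)."""
    m = (k - 3) // 2                                  # largest exponent on either side
    neg = [n * b**(-i) for i in range(m + 1)]         # n, n/b, ..., n/b^m
    pos = [p * b**(-i) for i in range(m, -1, -1)]     # p/b^m, ..., p/b, p
    return np.array(neg + [0.0] + pos)

def bin_matrix(A, bins):
    """Replace every entry of A by the nearest bin value (zeros map to the 0 bin)."""
    idx = np.argmin(np.abs(A[:, :, None] - bins[None, None, :]), axis=-1)
    return bins[idx]

A = np.array([[  7.8,   6.7,  3.7, -1.2],
              [ -7.5,   2.6, -7.4,  0.0],
              [-10.0,  -6.9,  0.4,  5.8],
              [  0.0,   0.0, 10.0, -8.7]])

bins = exponential_bins(n=A.min(), p=A.max(), b=10, k=5)   # (-10, -1, 0, 1, 10)
B = bin_matrix(A, bins)                                    # matches the binned example above
```

Because 0 is itself a bin, the entries known to be zero are never misassigned by the nearest-bin rule.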
This is not to say that different binning schemes might not be better suited to different problems. However, regardless of the problem at hand, one should make sure that the procedure does not depend on the particular choice of units the matrix entries are expressed in, which means that the procedure should only be sensitive to the relative magnitudes of the matrix entries, not their actual numerical values, which are unit-dependent. The exponential scheme obviously satisfies this requirement.

Exploratory analysis: some empirical datasets

We show the eigenvalues (red dots) of community matrices from nine different empirical interaction webs in Fig. 1, parameterized using allometric scaling relationships [24]. These matrices were binned with $b=4$, $k=7$; the eigenvalues of the binned matrices (blue circles) are also shown. The number of species in each web is indicated in the panel titles. To interpret the eigenvalue distributions correctly, note the scale discrepancy between the real and imaginary axes.

Figure 1. Eigenvalues of community matrices (red dots) and their binned counterparts (blue circles) from nine different empirical interaction webs. Datasets used (left-right, top-bottom): Broadstone Stream, Baja San Quintin, Carpinteria Salt Marsh, Estero de Punta Banda, Kongs Fjorden, Lough Hyne, Caribbean Reef, Weddell Sea, and Ythan Estuary (see [24] and references therein). The Kolmogorov-Smirnov distances between the original and binned eigenvalue distributions are given in each panel (top right), as well as the relative differences in the leading eigenvalues of the matrices and their Hermitian parts (bottom right).

The spectra of the binned matrices capture general features of the original ones, such as a larger bulk of eigenvalues near the origin of the complex plane, and semicircular arcs composed of a handful of eigenvalues protruding from this bulk towards the left half plane. More quantitatively, one may consider the Kolmogorov-Smirnov distance (a number between 0 and 1, corresponding to the maximum vertical distance between two empirical cumulative distribution functions) between the original and binned matrices' eigenvalue distributions as a measure of how well the distributions approximate each other. Since the Kolmogorov-Smirnov distance is defined for univariate samples, we consider the distance between the real and the imaginary parts of the eigenvalues separately. Their numerical values are shown in the panel insets of Fig. 1 (top two lines). From the point of view of local stability, the leading eigenvalue (that with the largest real part) is of crucial importance: the sign of the real part of this eigenvalue determines whether the system is stable (negative) or unstable (positive). We consider the difference between the leading eigenvalues of the binned and original matrices to see how well it is captured. However, the raw difference itself is not informative, since the numerical value of this eigenvalue difference simply depends on the choice of units we measure the matrix entries in. A better question is how well the leading eigenvalue of the original matrix is approximated compared to the total spread of the eigenvalues; that is, can we say that the leading eigenvalue of the binned matrix is close to that of the original one, compared to the total range of the real parts of all the eigenvalues of the original matrix? These values are included in the panels under "D(Re)".
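These comparison metrics are simple to compute; a sketch assuming SciPy is available (the function name and the use of the two-sample Kolmogorov-Smirnov statistic are ours; the "D(Re)" normalization follows the description above):

```python
import numpy as np
from scipy.stats import ks_2samp

def spectrum_comparison(A, B):
    """Compare the spectrum of a matrix A with that of its binned version B."""
    eA, eB = np.linalg.eigvals(A), np.linalg.eigvals(B)
    ks_re = ks_2samp(eA.real, eB.real).statistic   # KS distance, real parts
    ks_im = ks_2samp(eA.imag, eB.imag).statistic   # KS distance, imaginary parts
    r_A, r_B = eA.real.max(), eB.real.max()        # leading eigenvalues (largest real part)
    spread = eA.real.max() - eA.real.min()         # total range of Re(lambda) for A
    d_re = abs(r_B - r_A) / spread                 # relative stability error, "D(Re)"
    return ks_re, ks_im, d_re
```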
We can see that, for the datasets presented, the binning approximation always errs on the conservative side, overestimating the actual leading eigenvalue. This conservatism is due to the fact that our method of matrix binning includes both of the largest entries of the matrices as bins, and also the zero bin. The binning procedure will lump several entries into these extreme bin values. Therefore the variance of the entries of the binned matrix will in general exceed that of the original matrix, which leads to a higher variance in the eigenvalue distribution as well (see footnote 1). Note that none of the matrices in Fig. 1 are stable. This is because the empirical studies on which they are based only document trophic links, with no data on self-interactions. Due to this lack of information, we set all diagonal entries to zero. However, since the sum of the diagonal entries of a matrix is also the sum of its eigenvalues, such matrices cannot be stable. Their instability is therefore likely an artifact of our ignorance rather than an actual phenomenon. However, since we are interested in the ability of binned matrices to approximate spectra (whether they are stable or not), this lack of information is not a problem for us here. Just as with stability, we can also examine reactivity, measured by the leading eigenvalue of $A$'s Hermitian part $H(A) = (A + A^{\top})/2$; $A^{\top}$ is the transpose of $A$. We therefore calculate $H(A)$ and $H(B)$, and ask how well the leading eigenvalue of the latter approximates that of the former in comparison with the total spread of the eigenvalues of $H(A)$. The results, for the empirical webs of Fig. 1, are shown under "D(H)". Let us highlight some of the general conclusions from these motivating examples. First, the eigenvalue distributions of the original and binned matrices roughly overlap: their means and variances are very similar in both the real and imaginary directions. Second, some of the finer structure of the eigenvalue distributions is also captured: more eigenvalues near the origin, and a few protruding in arcs toward the left half plane. Third, despite these features, the match between the actual and predicted values for stability and reactivity is not always very close—cautioning that the approximation could fail to capture specific numerical estimates for these properties. The method therefore reveals the general features of the system, and gives a rough idea of the quantitative details.

Randomly generated interaction matrices

Here we test the binning procedure on randomly generated interaction matrices. For each matrix we first fixed the number of species $S$ and the connectance $C$. We then generated two uniform random numbers between 0 and 1 to determine the types of all nonzero interactions. The first number is the fraction of trophic interactions, the second the fraction of mutualistic ones out of those that were nontrophic (the rest of the interactions were designated competitive). For each of the three interaction types there was an associated probability distribution from which the matrix entries pertaining to the interaction type in question were drawn. The procedure for generating the probability distribution was the same in all cases: first, the shape of the probability distribution was determined: either lognormal or Gamma. (We did this to check the robustness of our results to the choice of the underlying distribution. The results are indeed robust; see Supporting Information, Figs. S23-S26).
Second, the mean and standard deviation of the distribution were uniformly sampled: for the mean, from $[0.1, \; 10]$, and for the standard deviation, from $[1, \; 10]$. For the trophic interactions, a random conversion efficiency was uniformly sampled from $[0.05, \; 0.2]$ to take into account the limited energy flow between trophic levels. To set the diagonal of the matrix, we sampled each diagonal entry from either a lognormal or a Gamma distribution, multiplied by $-1$ to keep the diagonal entries negative. The mean and standard deviation of the distribution were randomly drawn as in the off-diagonal case. See the Supporting Information for a more detailed breakdown of our method for generating these matrices. We repeated the parameterization for four different values of the species richness $S$ ($50$, $100$, $250$, and $500$) and of the connectance $C$ ($0.1$, $0.25$, $0.5$, and $1$), in all possible combinations. Then, $300$ replicates of all cases were generated. All resulting matrices were binned, based on all possible combinations of the following three variables. First, the number of bins was chosen to be either $3$, $5$, $7$, or $9$. Second, the binning resolution was set to $2$, $4$, $6$, $10$, and $14$. Third, we took into account the effect of accidentally misclassifying matrix entries. In empirical situations, it seems likely that strong interactions may accidentally be deemed weak (or vice versa), given the insufficient, qualitative information one uses to generate approximate matrices. We therefore explored what happens when we deliberately misclassify some fraction of the matrix entries. In doing so, we assumed that zero interaction strengths do not get misclassified (absent interactions cannot ever be observed, and so will not be accidentally classified as present), and that the bin category of a nonzero interaction can only move up or down one bin (a strong interaction may be misclassified as weak, but not as zero). We used three rates of misclassification: $0\%$, $10\%$, and $20\%$. Our results were not strongly affected by misclassification (Supporting Information, Figs. S1-S5). Here in the main text we always show results with a $10\%$ misclassification rate. First, we explored how accurately the leading eigenvalue is captured depending on the number of bins $k$, the binning resolution $b$, and the species richness $S$ (Fig. 2). It is apparent that $b=4, \, 6$ and $k=7, \, 9$ yield the best results. For instance, $90\%$ of all results with $b=4$ and $k=7$ have a difference between $r_B$ and $r_A$ less than $16\%$ of $\sigma_A$, the total spread of the real parts of $A$'s eigenvalues. That higher values of $k$ lead to better predictions was expected—the more bins we use, the more accurate the predictions will be. Notice though that there are diminishing returns. In a sense this is fortunate, because using more than 7-9 bins is probably not feasible in practice (seven bins means that, apart from zero, we have a "strong", a "medium", and a "weak" category both for positive and negative interactions). The result that $b \approx 4$ is optimal concerns the "best" choice for the definition of an order of magnitude. This suggests one should consider interactions sufficiently different if they differ by a factor of about four. Importantly, this choice proves to be consistently the best across the various models and metrics considered (see the next section and the Supporting Information).
Figure 2. Box plots of how the leading eigenvalues of randomly generated community matrices $A$ are captured by those of their binned counterparts $B$. Interpretation of the box plots: median (lines), $5\%$ to $95\%$ quantiles (boxes; note that they encompass $90\%$ instead of the usual $50\%$ of the data), and ranges (whiskers). Each matrix is binned with a $10\%$ misclassification rate. Rows correspond to different values of the binning resolution; columns to different numbers of bins. The data in each panel are separated based on species richness. Panel ordinates show the difference between the leading eigenvalue $r_A$ of the original and $r_B$ of the binned matrices relative to $\sigma_A$, the total range of the real parts of the original matrix's eigenvalues.

Instead of the relative, quantitative measure of how well the leading eigenvalue is approximated, we may also ask how often it is true that if a matrix $A$ is stable, then its binned counterpart $B$ is also stable. Of most interest are those matrices whose leading eigenvalues lie close to the imaginary axis, since in these cases a small perturbation to the spectrum may in principle change their stability properties. Once again, being "close" to the imaginary axis should not be measured on an absolute scale, since any given distance is unit-dependent, and the binning procedure is itself scale-invariant (Section 2). The relevant question is whether the leading eigenvalue is close to the imaginary axis compared to the total spread of all eigenvalues; i.e., whether $|r_A| / \sigma_A$ is small. The result (Supporting Information, Figs. S19-S20) depends on $b$ and $k$; for example, when using only three bins, one is more likely to misjudge stability than not. However, for $b=4$ and $k=7$, of those results for which $|r_A| / \sigma_A < 0.05$, stability is accurately predicted in $90\%$ of all cases. And, for $|r_A| / \sigma_A < 0.1$, this accuracy increases to $97\%$, after which it quickly approaches $100\%$, always increasing. Similarly to the leading eigenvalue, one can also look at reactivity and how well it is predicted (Supporting Information, Figs. S3-S5). Just as before, $b=4, \, 6$ and $k=7, \, 9$ provide the best approximations, with more than $90\%$ of all results falling within $23\%$ of the total spread of the eigenvalues of $H(A)$, the Hermitian part of $A$ (Supporting Information, Figs. S21-S22). We also consider the effect of changing the connectance on the efficiency of binning—it turns out however that this effect is neither systematic nor very strong (Supporting Information, Figs. S11-S18).

The Allometric Trophic Network

The previous section explored the effects of matrix binning on randomly generated interaction matrices. Here we consider a mechanistic model of multispecies communities, the Allometric Trophic Network [7]. In this model there are a number of noninteracting abiotic resources (here we assume there are two), primary producers utilizing those resources, and consumers eating either the producers or other consumers. The feeding network is generated using the niche model [27]. Consumers interact with their resources via generalized functional responses, which may include consumer interference. These interaction terms are functions of the organisms' average body masses, calculated based on species' trophic levels and simple allometric relationships. We generated $10,000$ different communities using the Allometric Trophic Network model.
In each simulation, we started out from a food web generated by the niche model, with 50 initial species whose abundances were uniformly distributed between $0.05$ and $0.2$. We followed the methodology described in [7] in every aspect to parameterize the model, except in choosing the Hill exponents for the trophic interactions: instead of randomly generating them for every interaction, we assumed they had the constant value of 2. This was done to make the system converge to a fixed point instead of a limit cycle or chaotic attractor, which is important because we are interested in predicting local asymptotic stability (indeed, in our simulations we only ever observed convergence to a fixed point). The model was run until equilibrium was reached, at which point we calculated the Jacobian to obtain the matrix $A$ (see the Supporting Information for a detailed description of our methods). Fig. 3a shows a spectrum generated by the procedure, along with that of its binned counterpart ($b=6$, $k=7$). In the particular simulation shown, 24 species and both abiotic resources survived to stably coexist out of the initial 50 species and two resources. The number of species plus resources persisting at equilibrium was variable between runs, and approximately normally distributed with mean 13.6 and standard deviation 3.0.

Figure 3. Spectra of various matrices (red dots) and their binned counterparts (blue circles). Panel (a): Jacobian matrix obtained at a stable equilibrium of one particular run of the Allometric Trophic Network model, binned with binning resolution $b=6$ and $k=7$ bins. Panel (b): Random matrix with independent uniformly distributed entries between $-1$ and $1$. It is binned with three bins $(-1, \; 0, \; 1)$. Panel (c): the same random matrix, but binned with five bins $(-1, \; -0.5, \; 0, \; 0.5, \; 1)$.

The matrices were subsequently binned with the binning resolution $b$ running through 2, 4, 6, 10, and 14; the number of bins $k$ taking on the values 3, 5, 7, and 9; and the rate of misclassification being either $0\%$, $10\%$, or $20\%$. Since the number of species one ends up with is highly variable in this model, we do not separate the results based on the number of species. Except for the case with 3 bins, the leading eigenvalue is captured well (Fig. 4), with $90\%$ of all other cases having a relative error less than $13\%$, and in some cases less than $6\%$ (e.g., for $b=4$, $k=7$). Similarly, reactivity (Supporting Information, Fig. S8-S10) is captured with relative error less than $12\%$ in $90\%$ of all cases with $b=4$, $k=7$, and misclassification rate $10\%$.

Figure 4. Box plots of how the leading eigenvalues of community matrices $A$ are captured by their binned counterparts $B$, where the $A$s are generated by the Allometric Trophic Network model. The figure is organized just like Fig. 2, except the data in the panels are not separated based on species richness.

Theoretical underpinning

Here we connect the matrix binning procedure with more rigorous mathematical concepts, to give a theoretical underpinning to why and when the method is expected to work. We employ two arguments, one based on the theory of random matrices, the other on the concept of pseudospectra.

Random matrices

Although empirical interaction webs are manifestly not random, an intuition for why the matrix binning procedure works may be gained by connecting it with random matrix theory [3].
There are several results in the theory of random matrices concerning the distribution of matrix eigenvalues in the complex plane, such as the circular [10] and elliptic [22] laws, which have found ecological applications as well [1]. Here we only consider the simplest version of the circular law; more complexity can be incorporated analogously (see Supporting Information). Suppose the entries of the $S \times S$ matrix $A$ are drawn independently from the same underlying probability distribution $p_A(x)$, which has mean zero and variance $V_A$. Then the law states that for $S$ large, the eigenvalues are uniformly distributed in a circle of radius $\sqrt{S V_A}$ in the complex plane, centered at the origin. Note that the circle's radius only depends on the variance of $p_A(x)$ but not its shape. This important property [25] means that two completely different underlying probability distributions will lead to the same eigenvalue distribution as long as their mean is zero and their variances are equal (in the limit of $S$ going to infinity, the distributions would converge to be exactly the same; for $S$ large but finite, there are slight but negligible differences). Even if the variances are not equal, the only difference between the eigenvalue distributions will be in the radii of the circles within which the eigenvalues are found. The key idea concerning matrix binning is as follows. Consider a random matrix $A$ whose entries are drawn from some distribution $p_A(x)$. We create its binned counterpart $B$. But the binned matrix $B$ is just another random matrix with a different underlying probability distribution: we essentially replace the original $p_A(x)$ with a discrete distribution $p_B(x)$, one which we can calculate from $p_A(x)$ and the bin positions. We will then know the eigenvalue distribution of $B$ as well, since that only depends on $p_B(x)$'s variance $V_B$. The spectra of $A$ and $B$ may then be compared analytically. As an example, let the entries of $A$ come from the uniform distribution $p_A(x) = \mathcal{U}[-1, 1]$, which has variance $V_A = 1 / 3$. Let us bin $A$ with three bins $(-1, \; 0, \; 1)$. The probability, on average, of any one entry being lumped into the $-1$ or $1$ bins is $1/4$, while the probability of being lumped into the $0$ bin is $1/2$, defining the discrete distribution $p_B(x)$. This distribution has variance $V_B = 1 / 2$. The eigenvalues of $A$ are therefore uniformly distributed in a circle of radius $r_A = \sqrt{S / 3}$, and those of $B$ in a circle of radius $r_B = \sqrt{S /2}$ (Fig. 3b). If we now ask how well the leading eigenvalue is approximated, we first note that they are simply given by $r_A$ and $r_B$ (since the eigenvalues fall in a circle). We therefore take the ratio of the two radii to assess the goodness of the approximation: $r_B / r_A = \sqrt{3 / 2} \approx 1.22$. The binned matrix overestimates the leading eigenvalue by this factor. One could repeat the analysis with a more refined binning scheme, for instance with $k = 5$ and $b = 2$. We then have five bins $(-1, \; -0.5, \; 0, \; 0.5, \; 1)$ instead of the original three, leading to $V_B = 3/8$ (see Supporting Information). Then the ratio of the circles' radii (and that of the leading eigenvalues) is $r_B / r_A = \sqrt{V_B / V_A} = \sqrt{9/8} \approx 1.06$, a near-perfect match brought about by the refinement of the binning resolution (Fig. 3c). In summary, certain classes of random matrices allow for a simple analytical evaluation of the effects of matrix binning. 
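This argument is easy to verify numerically; the following is a minimal sketch under the stated assumptions (iid uniform entries; the size $S = 500$ is chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
S = 500
A = rng.uniform(-1.0, 1.0, size=(S, S))       # entry variance V_A = 1/3

def nearest_bin(M, bins):
    bins = np.asarray(bins, dtype=float)
    return bins[np.argmin(np.abs(M[:, :, None] - bins), axis=-1)]

B3 = nearest_bin(A, [-1, 0, 1])                # V_B = 1/2 -> radius ratio sqrt(3/2) ~ 1.22
B5 = nearest_bin(A, [-1, -0.5, 0, 0.5, 1])     # V_B = 3/8 -> radius ratio sqrt(9/8) ~ 1.06

for M, label in [(A, "original"), (B3, "3 bins"), (B5, "5 bins")]:
    radius = np.abs(np.linalg.eigvals(M)).max()            # observed spectral radius
    print(f"{label}: radius = {radius:.2f}, circular-law prediction = {np.sqrt(S * M.var()):.2f}")
```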
Although real-world matrices are not going to conform to true random matrices exactly, there are nevertheless good reasons to make use of them anyway. Most importantly, random matrix theory serves as a theoretically well-understood benchmark, a reference model which, when it does not fit real-world data, reveals the properties of the empirical system that are causing the departure from the random expectation, thus facilitating a better understanding of the system. Quite apart from this justification, random matrix theory actually does have some success in interpreting empirical data as well [24], suggesting its use may prove more than purely theoretical.

Pseudospectra

The spectrum of a matrix $A$ is the set of complex numbers that are eigenvalues of $A$. In contrast, its $\epsilon$-pseudospectrum [26] is the set of complex numbers that are eigenvalues of some perturbed matrix $A + P$ with $\| P \| < \epsilon$ (the matrix norm $\| P \|$ is defined as the square root of the largest eigenvalue of $P^* P$, where $P^*$ is the conjugate transpose of $P$). Whereas the spectrum is composed of discrete points in the complex plane, the $\epsilon$-pseudospectrum comprises the union of regions of various sizes around the original eigenvalues. See the Supporting Information for an algorithm to compute pseudospectral regions. An important result [26, Theorem 2.2] states that the $\epsilon$-pseudospectrum of normal matrices (i.e., matrices $A$ for which $A^* A = A A^*$) is the union of circular disks of radius $\epsilon$ around $A$'s unperturbed eigenvalues. Moreover, such matrices have the smallest possible pseudospectra. Any deviation from normality will increase the size of this set, with strongly nonnormal matrices potentially having very large pseudospectral regions even for small values of $\epsilon$. Pseudospectra provide a rigorous and general measure of the effect of perturbations on the eigenvalues of matrices. Importantly, the binning procedure can be thought of as applying a certain perturbation $P$ to the underlying community matrix $A$, with $A + P = B$, where $B$ is the binned matrix. The pseudospectrum reveals how sensitively the eigenvalues respond to the perturbation induced by binning. In Fig. 5 the blue regions show the $\epsilon$-pseudospectrum for each of the nine empirical webs of Fig. 1, with $\epsilon = \| P \|$ being the appropriate perturbation norm for each web; the red dots are the unperturbed eigenvalues of $A$.

Figure 5. Pseudospectra of the empirical matrices of Fig. 1. The red dots show the original eigenvalues. The blue regions show the $\epsilon$-pseudospectra with $\epsilon$ equal to the norm $\| B - A \|$, where $B$ is the binned and $A$ is the original matrix. The red regions are what the same pseudospectra would look like if the matrices were normal ($A^* A = A A^*$). The numbers in the panel titles are the given matrix's scaled departure from normality; see the Supporting Information for how these values are calculated.

Pseudospectra measure the union of the effects of all possible perturbations of a given norm, which is why the blue regions are much wider than the positions of the binned eigenvalues in Fig. 1 would warrant them to be.
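The pseudospectrum induced by binning can be mapped on a grid of complex numbers using the standard characterization that $z$ belongs to the $\epsilon$-pseudospectrum whenever the smallest singular value of $zI - A$ is below $\epsilon$ [26]. A minimal sketch (function and variable names are ours):

```python
import numpy as np

def binning_pseudospectrum(A, B, re_grid, im_grid):
    """Mask of grid points inside the eps-pseudospectrum of A, with eps = ||B - A||."""
    eps = np.linalg.norm(B - A, ord=2)            # spectral norm of the binning perturbation
    I = np.eye(A.shape[0])
    mask = np.empty((len(im_grid), len(re_grid)), dtype=bool)
    for i, y in enumerate(im_grid):
        for j, x in enumerate(re_grid):
            s_min = np.linalg.svd((x + 1j * y) * I - A, compute_uv=False)[-1]
            mask[i, j] = s_min < eps              # z = x + iy lies in the pseudospectrum
    return eps, mask
```

Because the singular value decomposition is recomputed at every grid point, this brute-force version scales poorly with matrix size, which is the computational cost referred to below.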
Importantly, since normal matrices are the least sensitive to perturbations, the comparison of the actual pseudospectrum with the smaller one that would have been obtained if the matrix had been normal carries useful information: pseudospectral regions much larger than the one obtained under the assumption of normality signal a matrix whose spectrum is oversensitive to perturbing its entries. The red regions in Fig. 5 were computed as the union of disks of radius $\epsilon = \| P \|$, i.e., it is what the pseudospectrum would look like if the matrices were in fact normal. Since the blue regions barely exceed the red ones, the empirical matrices are "almost normal" and therefore their spectra are not overly sensitive to perturbations of their entries. Calculating pseudospectra is straightforward but computationally expensive. In the Supporting Information we therefore introduce a much simpler metric, the scaled departure from normality $\text{depn}(A)$, which characterizes matrix sensitivity with a single number such that $\text{depn}(A) \le 1$ guarantees low sensitivity to perturbations. The value of $\text{depn}(A)$ for each empirical matrix is reported in the panel titles of Fig. 5. All are significantly lower than one, implying their spectra are not sensitive to perturbations—in line with what we see in their pseudospectra.

Discussion

Our results show that certain global community properties, such as stability [15] and reactivity [20], can be predicted using crude, order-of-magnitude estimates of community matrix entries. These properties depend, broadly speaking, on the distribution of eigenvalues (stability depends only on the leading eigenvalue; reactivity on their whole ensemble), which was reasonably captured by the crude approximation. We gave two theoretical justifications for when and why this would be so, one based on the theory of random matrices, the other on the concept of pseudospectra. To check the robustness of our method, we applied it to three very different scenarios: empirical interaction webs parameterized via allometric relationships [24], randomly generated matrices, and those generated by the Allometric Trophic Network model [7]. Though the degree to which the method produced accurate results was situation-dependent, on the whole, it was able to make reliable predictions in each of these cases. This predictability appears to be at variance with earlier work emphasizing that even small errors in measuring the entries of the community matrix translate into large errors of prediction [29,30,8,19,21,2]. If the spectrum of the community matrix is well approximated, why would this be the case? The answer, we believe, is that the response of species to press perturbations depends on the inverse spectrum. Due to the preponderance of rare species in natural communities, such systems are necessarily close to a transcritical bifurcation point (small perturbations of the abundances may drive rare species extinct), implying that some eigenvalues are close to zero, making the inverse overly sensitive to measurement errors. One may try to omit the rare species from a community model, reasoning that—since they are rare—their impact on the community is slight. But unfortunately, due to the general shape of the species-abundance curve [17], there is no natural cutoff point for doing that. Therefore, one should try to approximate properties that do not depend on inverting the community matrix.
One property we have not yet mentioned is feasibility, i.e., whether a community equilibrium has all-positive species abundances. The stability or reactivity of unfeasible equilibria is of no relevance. The reason we did not consider feasibility separately is that we treat the community matrix as the linearization of some arbitrary nonlinear dynamics around some equilibrium state we observe in nature. Its feasibility is therefore already guaranteed by the fact that we are observing the system. Apart from this, note that feasibility is a property that depends on the inverse problem. For instance, in a simple Lotka–Volterra model given by $\mathrm{d} n /\mathrm{d} t = n \circ \left( b + An \right)$ (where $n$ is the vector of densities, $b$ the vector of intrinsic growth rates, $A$ the matrix of interaction coefficients, and $\circ$ denotes the Hadamard or element-by-element product), the equilibrium densities are given by $n = -A^{-1} b$. Therefore, as discussed, our method is ill-suited for determining feasibility to begin with. The accuracy of prediction was dependent on the number of bins $k$ the entries were classified into (fewer bins meant larger errors), and on the choice of the binning resolution $b$. In practice, a $k$ of seven to nine is probably the largest feasible number, since beyond this it becomes increasingly difficult to assign weak interactions to correct bins. For the binning resolution $b$, we found that values beyond $10$ gave significantly worse results; $b\approx 4$ was usually optimal, but the sensitivity of the results was not very great, and $b=2$ and $b=6$ performed similarly (Figs. 2, 4, S4, S9). Interestingly, we have found $b \approx 4$ to perform the best regardless of whether we looked at the empirical data, the randomly generated webs, or the Allometric Trophic Networks, and regardless of whether we estimated stability or reactivity. It may seem counterintuitive that the smallest value of $b$ is not the most accurate in recovering matrix properties. After all, the finer the resolution, the more accurate the binning. The reason is that, because the number of bins is finite, this finer resolution only applies to larger matrix entries but not necessarily to smaller ones. Consider the following example. A $100 \times 100$ matrix is generated by uniformly sampling all but two of its entries from $[-2, \; 2]$, and then setting the remaining two entries to $-8$ and $8$, respectively. If we bin this matrix with $b = 2$ and $k = 5$, the bins are $(-8, \; -4, \; 0, \; 4, \; 8)$. This means that all but the two outliers will be classified in the binned matrix as zero—a crude estimate if there ever was one. However, for $b = 4$ the bins become $(-8, \; -2, \; 0, \; 2, \; 8)$, resolving the underlying data much better. In the end, the best binning is, of course, achieved for $b \rightarrow 0$ and $k \rightarrow \infty$. Since this is not feasible in practice, one has to find the best compromise between a $b$ that is not too large but cannot be too small, and a $k$ that is not too small but cannot be too large. We also checked what happens if, in estimating the strongest interactions $p$ and $n$, we use $10$ empirically measured results and take their average. This way, a single very strong interaction that dominates the system will not artificially distort the binning. However, the results proved insensitive to doing this. 
The implication is that one should concentrate expensive and time-consuming empirical effort wisely: very accurate measurement of a couple of interaction coefficients does not improve predictive power much, while qualitative knowledge of many matrix entries does. It is easy to think of scenarios in which the binning procedure produces grossly inaccurate results. For instance, we have seen that matrices with $\text{depn}(A) \gg 1$ will have sensitive spectra, therefore even relatively slight perturbations of their entries (e.g. introduced by binning) may lead to large changes in their eigenvalue structure. We emphasize however that our metric for the scaled departure from normality is merely a proxy for matrix sensitivity: the complete picture is gained by looking at the full pseudospectrum. But the binning procedure may be inaccurate even when the degree of nonnormality is low. If the interaction strengths span too many orders of magnitude or are dominated by a small number of very large matrix entries, then even a reasonably fine-grained binning structure may classify all but the handful of very large entries as zero, leading to an eigenvalue distribution wildly different from the actual one. The reason why low nonnormality does not matter in this case is that the perturbation induced by binning is in fact very large. In practice however, whenever a few interactions dominate the system, instead of binning, one should concentrate just on those very large entries to gain insight into its workings. In other words, such systems naturally require different methods of analysis than the one presented here, which works better for complex interaction structures where many interactions together shape the properties of the system. Our results also shed light on the classic stability-complexity debate from a slightly different angle. The original "conventional wisdom" was that more complex systems (i.e., ones with more species, higher variance in the strength of interactions, and higher connectance) would be more stable in the face of perturbations [5, p. 586]. This view was challenged by the classic result of [14] which argued that, all else being equal, more complex systems have a lower probability of being stable: the eigenvalues of the community matrices of more complex systems are less likely to reside in the left half of the complex plane. Whether May's argument actually poses a true challenge to the conventional wisdom has been heavily debated [16]. Our findings contribute to this debate by showing that large, complex systems, if not necessarily more stable, are very robust against perturbations of the community matrix entries. The binning of a matrix can be thought of as a structural perturbation of the system: we are altering the matrix entries, changing the strength of interactions between species. Since, as we have seen, the binning of large complex interaction matrices has only a small effect on the spectrum, the perturbation induced by binning does not have a large effect on the system's large-scale properties. For instance, the stability and reactivity properties were unchanged. This means that if the system was actually stable, it was likely to stay that way, and conversely, unstable systems remained unstable after the perturbation induced by binning. 
In fact, the robustness interpretation of the stability-complexity debate is closer to its original formulation, where it was argued that the more pathways there are in an interaction web, the less it matters if one of those links is lost, since other pathways will compensate for the loss [13]. In summary, the results from approximating system properties using semiquantitative information point in the direction of using such approximations in practice, where obtaining precise quantitative information is hard, expensive, and time-consuming. The presented results show that such crude parameterization of complex systems may still reveal important global system properties of interest.

We thank A. Golubski, J. Grilli, M. J. Michalska-Smith, and E. L. Sander for discussions, and B. Althouse and four anonymous reviewers for their helpful comments. This work was supported by NSF #1148867.

Allesina, S., Tang, S., 2012. Stability criteria for complex ecosystems. Nature 483, 205–208.
Aufderheide, H., Rudolf, L., Gross, T., Lafferty, K. D., 2013. How to predict community responses to perturbations in the face of imperfect knowledge and network complexity. Proceedings of the Royal Society of London Series B 280, 20132355.
Bai, Z., Silverstein, J. W., 2009. Spectral analysis of large dimensional random matrices. Springer.
Barabás, G., Pásztor, L., Meszéna, G., Ostling, A., 2014. Sensitivity analysis of coexistence in ecological communities: theory and application. Ecology Letters 17, 1479–1494.
Begon, M., Townsend, C. R., Harper, J. L., 2005. Ecology: From Individuals to Ecosystems. Fourth edition. Blackwell Science Publisher, London.
Bender, E. A., Case, T. J., Gilpin, M. E., 1984. Perturbation experiments in community ecology: Theory and practice. Ecology 65, 1–13.
Berlow, E. L., Dunne, J. A., Martinez, N. D., Stark, P. B., Williams, R. J., Brose, U., 2009. Simple prediction of interaction strengths in complex food webs. Proceedings of the National Academy of Sciences USA 106, 187–191.
Dambacher, J. M., Li, H. W., Rossignol, P. A., 2002. Relevance of community structure in assessing indeterminacy of ecological predictions. Ecology 83, 1372–1385.
Elton, C. S., 1958. The Ecology of Invasions by Animals and Plants. Methuen, London.
Girko, V. L., 1984. The circle law. Theory of Probability and its Applications 29, 694–706.
Levins, R., 1968. Evolution in changing environments. Princeton University Press, Princeton.
Levins, R., 1974. Qualitative analysis of partially specified systems. Annals of the New York Academy of Sciences 231, 123–138.
MacArthur, R. H., 1955. Fluctuations of animal populations and a measure of community stability. Ecology 36, 533–536.
May, R. M., 1972. Will a large complex system be stable? Nature 238, 413–414.
May, R. M., 1973. Stability and Complexity in Model Ecosystems. Princeton University Press, Princeton.
McCann, K. S., 2000. The diversity-stability debate. Nature 405, 228–233.
McGill, B. J., Etienne, R. S., Gray, J. S., Alonso, D., Anderson, M. J., Benecha, H. K., Dornelas, M., Enquist, B. J., Green, J. L., He, F., Hurlbert, A. H., Magurran, A. E., Marquet, P. A., Maurer, B. A., Ostling, A., Soykan, C. U., Ugland, K. I., White, E. P., 2007. Species abundance distributions: moving beyond single prediction theories to integration within an ecological framework. Ecology Letters 10, 995–1015.
Meszéna, G., Gyllenberg, M., Pásztor, L., Metz, J. A. J., 2006. Competitive exclusion and limiting similarity: a unified theory. Theoretical Population Biology 69, 68–87.
Montoya, J. M., Woodward, G., Emmerson, M. C., Solé, R. V., 2009. Press perturbations and indirect effects in real food webs. Ecology 90, 2426–2433.
Neubert, M. G., Caswell, H., 1997. Alternatives to resilience for measuring the responses of ecological systems to perturbations. Ecology 78, 653–665.
Novak, M., Wootton, J. T., Doak, D. F., Emmerson, M., Estes, J. A., Tinker, M. T., 2011. Predicting community responses to perturbations in the face of imperfect knowledge and network complexity. Ecology 92, 836–846.
Sommers, H. J., Crisanti, A., Sompolinsky, H., Stein, Y., 1988. Spectrum of large random asymmetric matrices. Physical Review Letters 60, 1895–1898.
Tang, S., Allesina, S., 2014. Reactivity and stability of large ecosystems. Frontiers in Ecology and Evolution. http://dx.doi.org/10.3389/fevo.2014.00021
Tang, S., Pawar, S., Allesina, S., 2014. Correlation between interaction strengths drives stability in large ecological networks. Ecology Letters 17, 1094–1100.
Tao, T., Vu, V., Krishnapur, M., 2010. Random matrices: Universality of ESDs and the circular law. Annals of Probability 38 (5), 2023–2065.
Trefethen, L. N., Embree, M., 2005. Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators. Princeton University Press, New Jersey, USA.
Williams, R. J., Martinez, N. D., 2000. Simple rules yield complex food webs. Nature 404, 180–183.
Woodward, G., Speirs, D. C., Hildrew, A. G., 2005. Quantification and resolution of a complex, size-structured food web. Advances in Ecological Research 36, 85–135.
Yodzis, P., 1988. The indeterminacy of ecological interactions as perceived through perturbation experiments. Ecology 69, 508–515.
Yodzis, P., 2000. Diffuse effects in food webs. Ecology 81, 261–266.

1. Let $A$ be a normalized matrix, with entries having zero mean and unit variance. Denote its eigenvalues by $\lambda_A$ and their variance by $\text{Var}(\lambda_A)$. Now let $B = \alpha A$ with $\alpha > 0$ a scaling constant. $B$'s entries then have zero mean and variance $\alpha^2$, its eigenvalues are $\lambda_B = \alpha \lambda_A$, and their variance is $\text{Var}(\lambda_B) = \text{Var}(\alpha \lambda_A) = \alpha^2 \text{Var}(\lambda_A)$. In other words, increasing/decreasing the variance of the matrix entries increases/decreases the variance of the eigenvalues as well.
Kinetic and mechanistic study of sulfadimidine photodegradation under simulated sunlight irradiation

Zhineng Hao, Changsheng Guo, Jiapei Lv, Yan Zhang, Yuan Zhang and Jian Xu

Environmental Sciences Europe 2019, 31:40. Accepted: 25 June 2019

The extensive use of sulfadimidine (SDMD) has resulted in its presence in water bodies and subsequently posed risks to the environment and human health. In this study, the photodegradation of SDMD in water was studied under UV–Visible irradiation. The intermediates and degradation pathways of SDMD photodegradation, as well as the ecological risk of SDMD, were investigated. SDMD was rapidly degraded under alkaline conditions. Nitrate ion enhanced SDMD degradation under UV–Vis irradiation, while dissolved organic matter and Fe(III) inhibited its decay, and bicarbonate ion did not exert any effect. The reactive species involved in the SDMD photodegradation was singlet oxygen. Four major transformation products were identified by high-performance liquid chromatography–mass spectrometry (HPLC–MS), and the photolytic pathway was also proposed. Photoinduced hydrolysis, desulfonation and photooxidation were the major photodegradation mechanisms for SDMD. Toxicity analysis with Vibrio fischeri showed an obvious decrease in the toxicity of the reaction solution, from an initial inhibition rate of 38.5% to 0% after 150-min irradiation. Initial pH and common water constituents influence the photodegradation of SDMD under UV–Vis irradiation. Photodegradation of SDMD could reduce its ecological risk in aqueous solution.

The ubiquitous occurrence of pharmaceuticals and personal care products (PPCPs) in the environment has now been recognized as a new environmental problem and has aroused increasing concern about their fate and risks [1–5]. In the aquatic ecosystem, the majority of PPCPs tend to absorb light because their structures contain functional groups such as aromatic rings and heteroatoms, making them prone to undergo photolysis under UV–Vis irradiation [6–8]. Pharmaceuticals may be directly photodegraded: as a result of photo-absorption they are converted into excited states and undergo chemical transformation [9]. Indirect photolysis is also an important photodegradation pathway for PPCPs, which occurs through energy transfer to, or reactions with, transient reactive species generated in natural water under irradiation, including reactive oxygen species (ROS, for instance ·OH and 1O2) and triplet excited states of natural organic matter (3NOM*) [8, 9]. In the aqueous environment, the water constituents (such as nitrate, bicarbonate, dissolved organic matter (DOM) and Fe(III)) are of great importance to the photochemical behavior of pollutants [10–12]. Nitrate can generate ·OH under light irradiation, which is photoreactive toward organic contaminants [10, 13–15]. Bicarbonate is documented to yield CO3−· by reacting with ·OH, and it can also inhibit the phototransformation of organic contaminants because of this ·OH scavenging [14–16]. DOM, as the major form of organic carbon in surface water, has dual effects on the photodegradation of organic compounds. It can accelerate photooxidation by generating oxidants such as HOO/O2−, ·OH, 1O2 and triplet excited-state DOM, and it can also scavenge ROS or compete for light absorption [11, 17, 18].
Fe(III) complexes can also accelerate the photodegradation of organic compounds: internal charge transfer generates Fe(II) and hydroxyl radicals, which enhance the photodegradation and serve as catalytic oxidants [10, 17, 19]. Sulfonamides are antibiotics that contain aromatic rings in their structure, and therefore have the potential to absorb light and undergo photochemical degradation under irradiation [8, 20]. Many researchers have reported the photodegradation of sulfonamides, including sulfamethoxazole, sulfamethazine, and sulfadimethoxine, in the aqueous environment [16, 18, 21–24]. Albeit with similar structures, these antibiotics exhibit different photodegradation behaviors [8, 21]. Sulfadimidine (SDMD) is a sulfonamide antibiotic which has long been used for prophylactic or therapeutic purposes in animal production [25]. Due to its high water solubility and mobility, SDMD has been widely detected in various environmental matrices, with concentrations up to 323 ng/L in water [25–27], up to 20 mg/kg in animal manure [26, 28] and 15 μg/kg in agricultural soils [26, 27]. The runoff concentration from manured plots could reach 680 μg/L with a 1-day contact time [26, 28]. Generally, sorption and photodegradation processes governed sulfamethazine fate in freshwater–sediment microcosms [29], and SDMD photodegradation has been addressed in several studies [7, 30–32]. However, few studies have focused solely on its photodegradation behavior, and its photodegradation products and mechanisms remain unclear. In this study, we investigated the photochemical degradation of SDMD in aqueous solution under UV–Vis irradiation. The experiments were conducted under different conditions, including different pH values and different levels of water constituents. The degradation intermediates/products were identified by high-performance liquid chromatography–mass spectrometry (HPLC–MS), and tentative degradation pathways were proposed. A bioassay with Vibrio fischeri bacteria was carried out to test the variation in acute toxicity of SDMD during its photodegradation.

Sulfadimidine (purity > 99%) and humic acid sodium salt (HA) were purchased from Sigma Aldrich (St. Louis, MO, USA). Methanol and isopropanol (HPLC grade) were obtained from Tedia Company, Inc. All other analytical-grade chemicals were used without further purification.

Photodegradation experiments

A Hg lamp (300 W) and a xenon lamp (800 W, Institute of Electric Light Source, Beijing) placed in a cold trap were employed to simulate UV–Vis and sunlight irradiation. The photodegradation experiments were performed in a photochemical reactor system (XPA-7, Nanjing Xujiang Machinery Factory, Nanjing, China), with the main apparatus containing a cylindrical quartz well for UV irradiation (λ > 200 nm) and a Pyrex well for simulated sunlight irradiation (λ > 290 nm). The light source irradiance spectra (Fig. 1) were measured with a fiber-optic spectrometer (Ocean Optics, USB2000-FLG), and the light intensities were measured with a full-spectrum light power meter (CEL-NP2000, Beijing Zhongjiaojinyuan Technology Limited company) at the center of the solutions, giving 3.85 mW/cm2 and 4 mW/cm2 for the mercury and xenon lamps, respectively. The relatively stable photon flux (variation < 5%) confirmed the stable irradiance during the photodegradation experiments [33]. The SDMD absorption spectra (Additional file 1: Figure S1) under different pH conditions were determined with a UV–Vis spectrophotometer (Varian Cary 100).
UV absorption spectra of SDMD and irradiance spectra of the light sources The pH value of the solution was adjusted with HCl or NaOH. Quartz tubes (60 mL) containing 50 mL of reaction solution ([SDMD]0 = 10 mg/L) were placed in the photochemical reactor system and magnetically stirred. Two milliliters of reaction solution were sampled at specific time intervals. To explore the effects of pH and water constituents [nitrate, bicarbonate, humic acid (HA) and Fe(III)] on SDMD photodegradation, the reagents were added to the reaction solutions in a series of concentrations at a specific pH. To examine the reactive species involved in SDMD photodegradation, scavenging experiments were performed using isopropanol as the quencher of ·OH [34] and sodium azide (NaN3) as the quencher of 1O2 and ·OH [34, 35]. Dark control experiments were performed following the same procedures, with the quartz tubes wrapped in aluminum foil. Triplicate experiments were conducted for all conditions. Products/intermediates analysis An Agilent 1200 HPLC equipped with a diode array detector was employed to analyze SDMD concentrations, with the absorbance wavelength at 261 nm. Compounds were separated by an Agilent Zorbax Eclipse XDB-C18 column (100 mm × 2.1 mm, 3.5 μm). 30% methanol and 70% water with 0.1% formic acid were used as the mobile phases. The flow rate was maintained at 1.0 mL/min. The photodegradation products were identified by LC–MS (Quest LCQ Duo, US) equipped with an electrospray ionization (ESI) source and operated in positive ion mode (ESI+), scanning the range m/z 50–500. The LC separation was performed using an Eclipse XDB-C18 column (150 mm × 2.1 mm, 5 μm) with a mobile phase of acetonitrile (A) and water (B) (with 0.1% formic acid) at a flow rate of 0.3 mL/min. The column temperature was kept at 40 °C, and the gradient was as follows: 0–5 min: 90% B; 5–7 min: 85% B; 7–11 min: 60% B; 11–15 min: 10% B; 15–25 min: 90% B. The capillary voltage and cone voltage were 3.5 kV and 25 V, respectively. The desolvation temperature was 350 °C and the source temperature was 120 °C. The flow of sheath gas was 7 L/min. Toxicity evolvement The toxicity of the SDMD solution during photodegradation was evaluated with the bioluminescence inhibition test using Vibrio fischeri. The test was conducted with a Microtox Toxicity Analyzer (Model 500), with the initial SDMD concentration at 10 mg/L. The luminescence was determined after incubation at 15 °C for 15 min. The inhibition of luminescence compared to a toxicant-free control gives the percentage of inhibition, and was calculated following the established protocol using the Microtox calculation program [36, 37]. Briefly, the decrease in bacterial luminescence (Γ, %) due to the addition of toxic chemicals can be determined with the equation [36, 37]:
$$\Gamma = 100 - \left( \frac{\mathrm{IT}_{T}}{\mathrm{IT}_{0} \times \left( \mathrm{IC}_{T}/\mathrm{IC}_{0} \right)} \right) \times 100,$$
where IC0 and IT0 are the luminescence of the control and the test sample at t = 0, and ICT and ITT are the luminescence values for the control and test samples measured after T minutes of exposure.
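As a small numerical illustration of how this inhibition formula is applied, the sketch below computes Γ from hypothetical luminescence readings; the values are placeholders, not data from the study.

```python
def inhibition_percent(IT_0, IT_T, IC_0, IC_T):
    """Percent decrease in bacterial luminescence (Gamma) after T minutes of exposure,
    relative to a toxicant-free control, following the formula quoted above."""
    control_drift = IC_T / IC_0                    # change of the control over the exposure time
    return 100.0 - (IT_T / (IT_0 * control_drift)) * 100.0

# Hypothetical readings (arbitrary luminescence units), for illustration only
print(inhibition_percent(IT_0=95.0, IT_T=60.0, IC_0=100.0, IC_T=102.0))  # ~38% inhibition
```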
The comparative experiment showed that no observable loss of SDMD occurred in the dark controls, indicating that SDMD degradation by processes other than photolysis was negligible. Results also showed that under simulated sunlight irradiation (λ > 290 nm), SDMD did not photodegrade due to its weak absorption of light at λ > 290 nm (Fig. 1), which was consistent with a previous observation [38]. In contrast, SDMD could be quickly photodegraded under UV–Vis (λ > 200 nm) irradiation. In this study, a 300-W high-pressure mercury lamp was used to explore the SDMD photodegradation in aqueous solution. SDMD photodegradation at varying pH Figure 2 illustrated the photodegradation of SDMD in solution at different pH values. It showed that within 150-min UV–Vis irradiation, SDMD was almost completely eliminated under alkaline conditions. Linear regression between ln(Ct/C0) and time (t) indicated that the photochemical reaction followed pseudo-first-order kinetics (R2 > 0.98). The degradation rate constant k, half-life (t1/2) and R2 are summarized in Additional file 1: Table S1. Results indicated that SDMD in alkaline solution was photolyzed more quickly than in acidic environments. The highest k value was 0.0363 min−1 at pH 9, which was much greater than the maximum degradation rate of 0.0179 min−1 in acidic solution at pH 2. This is likely due to the speciation of SDMD at different pH values influencing the absorption of light at different wavelengths (Additional file 1: Figure S1). The pKa1 and pKa2 values of SDMD were 1.95 and 7.45, respectively [39]; thus, substrate anions with high electron density surrounding the ring structure under alkaline conditions were much more reactive in the photochemical reaction than their neutral or protonated species [40, 41]. The photodegradation of SDMD at different pH and their corresponding rate constants. [SDMD]0 = 10 mg/L
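The rate constants and half-lives reported above follow from fitting ln(Ct/C0) against time. A minimal fitting sketch is given below; the concentration–time pairs are invented placeholders chosen so that k comes out near the pH 9 value reported above, not the measured data.

```python
import numpy as np

# Hypothetical (time in min, C_t/C_0) pairs -- placeholders, not the measured data
t = np.array([0, 10, 30, 60, 90, 120, 150], dtype=float)
c_ratio = np.array([1.00, 0.70, 0.34, 0.11, 0.040, 0.013, 0.0045])

# Pseudo-first-order model: ln(C_t/C_0) = -k t
slope, intercept = np.polyfit(t, np.log(c_ratio), 1)
k = -slope                      # rate constant, 1/min
t_half = np.log(2) / k          # half-life, min

# Coefficient of determination of the linearized fit
residuals = np.log(c_ratio) - (slope * t + intercept)
ss_res = np.sum(residuals**2)
ss_tot = np.sum((np.log(c_ratio) - np.log(c_ratio).mean())**2)
r2 = 1 - ss_res / ss_tot

print(f"k = {k:.4f} 1/min, t1/2 = {t_half:.1f} min, R^2 = {r2:.3f}")
```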
Influence of different constituents Nitrate ion The effect of NO3− on the SDMD photodegradation is illustrated in Fig. 3a. In natural water bodies, the level of nitrate ion generally ranges from 10−5 to 10−3 mol/L [12]. In this study, the addition of NO3− slightly accelerated the SDMD removal rate. The first-order rate constant k increased from 0.032 min−1 (without NO3−) to 0.037 min−1 (10 mmol/L NO3−). It has been reported that the ubiquitous nitrate ion in natural water is the main source of ·OH under irradiation, which further induces the photodegradation of organic compounds [13, 42]. The results suggested that ·OH could contribute to SDMD photodegradation. Since SDMD was mainly degraded through direct photolysis, indirect photolysis induced by ·OH only played a minor role in SDMD removal. In view of the nonselectivity of ·OH toward organic pollutants and its high reactivity toward sulfonamides [43], ·OH formed in natural sunlit waters might play important roles in the photodegradation of SDMD and other sulfonamides. Effect of nitrate (a), bicarbonate (b), HA (c) and Fe(III) (d) on the photolysis of SDMD in aqueous solution. [SDMD]0 = 10 mg/L, pH = 8 Bicarbonate ion The effect of bicarbonate on SDMD photodegradation is shown in Fig. 3b, which suggested that the addition of bicarbonate did not exert any effect. Bicarbonate is also a ubiquitous ion in natural water, and its presence is of great importance to the photochemical reactions of organic compounds. Bicarbonate can cause approximately 10% of ·OH quenching as a radical scavenger [13], and it can also lead to the generation of the carbonate radical (·CO3−). Compared with ·OH, ·CO3− is less reactive and is conducive to the removal of easily oxidized substances in natural water [44, 45]. Due to its lower reactivity, ·CO3− in natural water is more stable than ·OH [14], and its effect on SDMD photodegradation was negligible. Dissolved organic matter (DOM) Figure 3c showed that SDMD photodegradation followed pseudo-first-order kinetics in the presence of humic acid (HA), and HA had an inhibitory effect on SDMD photolysis. As shown in Additional file 1: Figure S2, HA has a wide light absorption range of 200–700 nm. HA could compete with SDMD for the absorption of short-wavelength UV light in the solution, resulting in the inhibition of SDMD photodegradation. Its scavenging ability toward ·OH might be another reason for the inhibition effect. As a photosensitizer, HA may be conducive to photodegradation [46–48], but this result showed that photosensitization played only a minor role in SDMD photodegradation. Iron (III) Iron is an abundant element in the natural water environment [19]. In this study, three concentrations of FeCl3 (10 μmol/L, 20 μmol/L and 40 μmol/L) representing environmental levels were added into the reaction solution to evaluate its effect on SDMD photodegradation. As shown in Fig. 3d, the degradation of SDMD under UV–Vis irradiation was obviously decreased in the presence of Fe(III). Although Fe(III) has been reported to enhance sulfadimethoxine photodegradation [18], a reversed trend was observed for SDMD in this experiment. This was mainly related to the iron speciation. Under acidic conditions, Fe3+, FeOH2+, and FeSO4+ are the main dissolved forms of Fe(III), which are photoactive for the removal of organic chemicals via photo-generated ·OH [49]. Under neutral or alkaline conditions, dimeric and oligomeric Fe(III) compounds and Fe(III) colloids are the dominant forms. Fe(III) colloids such as ferric oxyhydroxides tend to prevail over other iron species at pH 8 [49]; these would absorb or scatter light and finally lead to less light being received by SDMD in the aqueous solution. Mechanisms of SDMD photodegradation To determine which reactive oxygen species was involved in the SDMD photolysis, NaN3, isopropanol and N2 were individually introduced into the reaction system. It has been reported that NaN3 quenches 1O2 and isopropanol quenches ·OH or O2·−, while purging N2 into the system can eliminate dissolved oxygen (DO), which is documented to quench molecules from the excited triplet state back to the ground state [50]. Results in Fig. 4 showed that the addition of NaN3 inhibited SDMD degradation, suggesting that 1O2 formed during the photoreaction played an important role in SDMD photodegradation. Other ROS such as ·OH or O2·− may not be the key radicals involved in the photolysis process. Effect of NaN3, isopropanol and N2 on SDMD photolysis in solution. [SDMD]0 = 10 mg/L, pH = 8 Due to its high sensitivity, selectivity and efficiency, LC–MS has been widely used as a powerful tool for identifying and characterizing drug metabolites [51]. Herein, the intermediates/products of SDMD photodegradation were identified from the retention times and LC/MS–ESI+ spectra. A total of eleven intermediates/products were identified, with detailed information in Additional file 1: Figures S3 and S4. To avoid unreliable analysis, these intermediates were compared to previous studies [8, 51]. Usually, the direct photodegradation products of sulfonamides arise from similar cleavage, as shown in Additional file 1: Scheme S1, and cleavage at these positions is mainly involved in photohydrolysis and desulfonation processes [11, 20, 51–53].
As shown in Additional file 1: Figures S3 and S4, the direct photolysis of SDMD generated several photoproducts which were also observed by others [18, 51–53]. The direct cleavage processes generated products I, II, III, V and VIII, which have been reported in the literature [8, 51]. Desulfonylation is another important direct pathway, induced by excitation of SDMD to its triplet state, producing the SO2 extrusion product IV, which has been identified by previous studies [20, 21]. In addition, photooxidation was involved in SDMD photodegradation. Oxidation products whose m/z increased by 16 were identified as N-oxides (VI and X), and the product whose m/z increased by 32 was identified as product VI. O-addition to the phenyl ring or addition to both rings, or hydroxyl addition through reaction with HO·, may account for these oxidation products [18]. The photoproducts may further undergo the desulfonylation process to produce the SO2 extrusion products VII and X. The SDMD photolysis pathways are proposed in Fig. 5. Proposed photodegradation pathway of SDMD under UV–Vis irradiation. [SDMD]0 = 10 mg/L, pH = 8 Toxicity variation Because the V. fischeri luminescent bacteria assay has shown great potential for ecotoxicological evaluation in comparison to other bioassays, it has been widely used for assessing toxic effects in aquatic ecosystems [36, 37]. Figure 6 illustrated the toxicity evolution of the photodegradation reaction solution under UV–Vis irradiation. Results showed that the initial inhibition percentage of SDMD (0 h of irradiation) toward V. fischeri was 38.5%. As the reaction continued, the degradation intermediates showed decreased toxicity to V. fischeri. The inhibition disappeared after 90 min of irradiation. It should be noted that the inhibition percentage first decreased to 22.2% (10 min), then increased to 29.5% (40 min) and finally decreased to 21.5% (60 min) and 0 (≥ 90 min). This trend indicated the complex photodegradation pathways of SDMD, with some intermediates more toxic than their precursors being produced and then further transformed into less toxic compounds. Overall, the toxicity was eliminated after prolonged photodegradation, and the risk of SDMD in natural environments is reduced when it is exposed to light irradiation. When utilizing UV light to remove SDMD and other antibiotics, the toxicity variation should be monitored, so that the optimum treatment time with low toxicological risk to the ecology and human health can be determined. Inhibition (%) of the luminescence of the photobacterium V. fischeri as a function of the irradiation time for SDMD. [SDMD]0 = 10 mg/L, pH = 8 The present work explored the photodegradation of SDMD in aqueous solution. The SDMD photodegradation under UV–Vis irradiation was pH dependent, with higher removal efficiencies under alkaline conditions than under acidic and neutral conditions. The common water constituents exerted different influences on the SDMD photolysis, depending on the different reactive oxygen species involved. Results showed that 1O2 was an important radical generated during the photolysis process. A total of eleven reaction intermediates/products were identified, which were less toxic toward V. fischeri, indicating that photodegradation played a positive role in diminishing the ecotoxicological risk of SDMD in natural water.
Abbreviations: SDMD, sulfadimidine; HPLC–MS, high-performance liquid chromatography–mass spectrometry; PPCPs, pharmaceuticals and personal care products; ROS, reactive oxygen species; DOM, dissolved organic matter; 3NOM*, triplet excited states of natural organic matter; ESI, electrospray ionization.
This work was supported by the National Natural Science Foundation of China (Nos. 41673120 and 51208482). ZH, CG and JX were involved in the experiments and manuscript writing; JL, Yan Zhang and Yuan Zhang were responsible for the data analysis and study design. ZH and JX contributed to the manuscript correction. All authors read and approved the final manuscript.
12302_2019_223_MOESM1_ESM.docx Additional file 1: Figure S1. The absorption spectra of SDMD at different pH. Figure S2. UV–Vis spectrum of the HA. Figure S3. The total ion chromatogram for UV–Vis photodegradation of SDMD. Figure S4. MS spectra of the intermediates detected in the SDMD photodegradation solutions under the simulated light irradiation. Scheme S1. Potential direct photolysis cleavage sites [1]. Table S1. Rate constants (k), half-lives (t1/2) and correlation coefficients (r2) for the photodegradation of SDMD under UV–Vis irradiation at different pH.
State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing, 100012, China
State Key Laboratory of Environmental Chemistry and Ecotoxicology, Research Center for Eco-Environmental Sciences, Chinese Academy of Sciences, Beijing, 100085, China
Stamm C, Alder AC, Fenner K, Hollender J, Krauss M, McArdell CS, Ort C, Schneider MK (2008) Spatial and temporal patterns of pharmaceuticals in the aquatic environment: a review. Geography Compass 2:920–955
Mompelat S, Le Bot B, Thomas O (2009) Occurrence and fate of pharmaceutical products and by-products, from resource to drinking water. Environ Int 35:803–814
Homem V, Santos L (2011) Degradation and removal methods of antibiotics from aqueous matrices—a review. J Environ Manage 92:2304–2347
García-Galán MJ, Díaz-Cruz MS, Barceló D (2012) Kinetic studies and characterization of photolytic products of sulfamethazine, sulfapyridine and their acetylated metabolites in water under simulated solar irradiation. Water Res 46:711–722
Jia A, Wan Y, Xiao Y, Hu J (2012) Occurrence and fate of quinolone and fluoroquinolone antibiotics in a municipal sewage treatment plant. Water Res 46:387–394
Boreen AL, Arnold WA, McNeill K (2003) Photodegradation of pharmaceuticals in the aquatic environment: a review. Aquat Sci 65:320–341
Kim I, Yamashita N, Tanaka H (2009) Photodegradation of pharmaceuticals and personal care products during UV and UV/H2O2 treatments. Chemosphere 77:518–525
Boreen AL, Arnold WA, McNeill K (2004) Photochemical fate of sulfa drugs in the aquatic environment: sulfa drugs containing five-membered heterocyclic groups. Environ Sci Technol 38:3933–3940
Prados-Joya G, Sánchez-Polo M, Rivera-Utrilla J, Ferro-garcía M (2011) Photodegradation of the antibiotics nitroimidazoles in aqueous solution by ultraviolet radiation. Water Res 45:393–403
Tercero Espinoza LA, Neamţu M, Frimmel FH (2007) The effect of nitrate, Fe(III) and bicarbonate on the degradation of bisphenol A by simulated solar UV-irradiation. Water Res 41:4479–4487
Ge L, Chen J, Qiao X, Lin J, Cai X (2009) Light-source-dependent effects of main water constituents on photodegradation of phenicol antibiotics: mechanism and kinetics. Environ Sci Technol 43:3101–3107
Mao L, Meng C, Zeng C, Ji Y, Yang X, Gao S (2011) The effect of nitrate, bicarbonate and natural organic matter on the degradation of sunscreen agent p-aminobenzoic acid by simulated solar irradiation. Sci Total Environ 409:5376–5381
Brezonik PL, Fulkerson-Brekken J (1998) Nitrate-induced photolysis in natural waters: controls on concentrations of hydroxyl radical photo-intermediates by natural scavenging agents. Environ Sci Technol 32:3004–3010
Vione D, Khanra S, Man SC, Maddigapu PR, Das R, Arsene C, Olariu RI, Maurino V, Minero C (2009) Inhibition vs. enhancement of the nitrate-induced phototransformation of organic substrates by the ·OH scavengers bicarbonate and carbonate. Water Res 43:4718–4728
Ji Y, Zeng C, Ferronato C, Chovelon JM, Yang X (2012) Nitrate-induced photodegradation of atenolol in aqueous solution: kinetics, toxicity and degradation pathways. Chemosphere 88:644–649
Lam MW, Mabury SA (2005) Photodegradation of the pharmaceuticals atorvastatin, carbamazepine, levofloxacin, and sulfamethoxazole in natural waters. Aquat Sci 67:177–188
Fisher JM, Reese JG, Pellechia PJ, Moeller PL, Ferry JL (2006) Role of Fe(III), phosphate, dissolved organic matter, and nitrate during the photodegradation of domoic acid in the marine environment. Environ Sci Technol 40:2200–2205
Guerard JJ, Chin YP, Mash H, Hadad CM (2009) Photochemical fate of sulfadimethoxine in aquaculture waters. Environ Sci Technol 43:8587–8592
Feng W, Nansheng D (2000) Photochemistry of hydrolytic iron (III) species and photoinduced degradation of organic compounds. A minireview. Chemosphere 41:1137–1147
Trovó AG, Nogueira RFP, Agüera A, Sirtori C, Fernández-Alba AR (2009) Photodegradation of sulfamethoxazole in various aqueous media: persistence, toxicity and photoproducts assessment. Chemosphere 77:1292–1298
Boreen AL, Arnold WA, McNeill K (2005) Triplet-sensitized photodegradation of sulfa drugs containing six-membered heterocyclic groups: identification of an SO2 extrusion photoproduct. Environ Sci Technol 39:3630–3638
Accinelli C, Hashim M, Epifani R, Schneider RJ, Vicari A (2006) Effects of the antimicrobial agent sulfamethazine on metolachlor persistence and sorption in soil. Chemosphere 63:1539–1545
Baran W, Sochacka J, Wardas W (2006) Toxicity and biodegradability of sulfonamides and products of their photocatalytic degradation in aqueous solutions. Chemosphere 65:1295–1299
Nasuhoglu D, Yargeau V, Berk D (2011) Photo-removal of sulfamethoxazole (SMX) by photolytic and photocatalytic processes in a batch reactor under UV-C radiation (lambdamax = 254 nm). J Hazard Mater 186:67–75
Xu WH, Zhang G, Zou SC, Li XD, Liu YC (2007) Determination of selected antibiotics in the Victoria Harbour and the Pearl River, South China using high-performance liquid chromatography–electrospray ionization tandem mass spectrometry. Environ Pollut 145:672–679
Kaczala FE, Blum S (2016) The occurrence of veterinary pharmaceuticals in the environment: a review. Curr Anal Chem 12:169–182
Gaw S, Thomas Kevin V, Hutchinson Thomas H (2014) Sources, impacts and trends of pharmaceuticals in the marine and coastal environment. Philosoph Transac R Soc B 369:20130572
Larsbo M, Fenner K, Stoob K, Burkhardt M, Abbaspour K, Stamm C (2008) Simulating sulfadimidine transport in surface runoff and soil at the microplot and field scale. J Environ Q 37:788–797
Carstens KL, Gross AD, Moorman TB, Coats JR (2013) Sorption and photodegradation processes govern distribution and fate of sulfamethazine in freshwater-sediment microcosms. Environ Sci Technol 47:10877–10883
Chen N, Huang Y, Hou X, Ai Z, Zhang L (2017) Photochemistry of hydrochar: reactive oxygen species generation and sulfadimidine degradation. Environ Sci Technol 51:11278–11287
Yang B, Mao X, Pi L, Wu Y, Ding H, Zhang W (2017) Effect of pH on the adsorption and photocatalytic degradation of sulfadimidine in Vis/g-C3N4 progress. Environ Sci Pollut Res 24:8658–8670
Sören T-B, Peters D (2007) Photodegradation of pharmaceutical antibiotics on slurry and soil surfaces. Landbauforschung Volkenrode 57:13
Miller PL, Chin YP (2002) Photoinduced degradation of carbaryl in a wetland surface water. J Agric Food Chem 50:6758–6765
Buxton GV, Greenstock CL, Helman WP, Ross AB (1988) Critical review of rate constants for reactions of hydrated electrons, hydrogen atoms and hydroxyl radicals. J Phys Chem Ref Data 17:513–886
Miolo G, Viola G, Vedaldi D, Dall'Acqua F, Fravolini A, Tabarrini O, Cecchetti V (2002) In vitro phototoxic properties of new 6-desfluoro and 6-fluoro-8-methylquinolones. Toxicol In Vitro 16:683–693
Calza P, Marchisio S, Medana C, Baiocchi C (2010) Fate of antibacterial spiramycin in river waters. Anal Bioanal Chem 396:1539–1550
Parvez S, Venkataraman C, Mukherji S (2006) A review on advantages of implementing luminescence inhibition test (Vibrio fischeri) for acute toxicity prediction of chemicals. Environ Int 32:265–268
Pouliquen H, Delépée R, Larhantec-Verdier M, Morvan ML, Le Bris H (2007) Comparative hydrolysis and photolysis of four antibacterial agents (oxytetracycline oxolinic acid, flumequine and florfenicol) in deionised water, freshwater and seawater under abiotic conditions. Aquaculture 262:23–28
Burkhardt M, Stamm C (2007) Depth distribution of sulfonamide antibiotics in pore water of an undisturbed loamy grassland soil. J Environ Qual 36:588–596
Latch DE, Packer JL, Stender BL, VanOverbeke J, Arnold WA, McNeill K (2005) Aqueous photochemistry of triclosan: formation of 2,4-dichlorophenol, 2,8-dichlorodibenzo-p-dioxin, and oligomerization products. Environ Toxicol Chem 24:517–525
Chen Z, Cao G, Song Q (2010) Photo-polymerization of triclosan in aqueous solution induced by ultraviolet radiation. Environ Chem Lett 8:33–37
Zhou X, Mopper K (1990) Determination of photochemically produced hydroxyl radicals in seawater and freshwater. Mar Chem 30:71–88
Sági G, Csay T, Szabó L, Pátzay G, Csonka E, Takács E, Wojnárovits L (2015) Analytical approaches to the OH radical induced degradation of sulfonamide antibiotics in dilute aqueous solutions. J Pharm Biomed Anal 106:52–60
Huang J, Mabury SA (2000) Steady-state concentrations of carbonate radicals in field waters. Environ Toxicol Chem 19:2181–2218
Mazellier P, Busset C, Delmont A, De Laat J (2007) A comparison of fenuron degradation by hydroxyl and carbonate radicals in aqueous solution. Water Res 41:4585–4594
Hassett JP (2006) Dissolved natural organic matter as a microreactor. Science 311:1723–1724
Vione D, Falletti G, Maurino V, Minero C, Pelizzetti E, Malandrino M, Ajassa R, Olariu RI, Arsene C (2006) Sources and sinks of hydroxyl radicals upon irradiation of natural water samples. Environ Sci Technol 40:3775–3781
Zhan M, Yang X, Xian Q, Kong L (2006) Photosensitized degradation of bisphenol A involving reactive oxygen species in the presence of humic substances. Chemosphere 63:378–386
Chiron S, Minero C, Vione D (2006) Photodegradation processes of the antiepileptic drug carbamazepine, relevant to estuarine waters. Environ Sci Technol 40:5977–5983
Shirayama H, Tohezo Y, Taguchi S (2001) Photodegradation of chlorinated hydrocarbons in the presence and absence of dissolved oxygen in water. Water Res 35:1941–1950
García-Galán MJ, Silvia Díaz-Cruz M, Barceló D (2008) Identification and determination of metabolites and degradation products of sulfonamide antibiotics. Trends Anal Chem 27:1008–1022
Dodd MC, Huang C-H (2004) Transformation of the antibacterial agent sulfamethoxazole in reactions with chlorine: kinetics, mechanisms, and pathways. Environ Sci Technol 38:5607–5615
Motten AG, Chignell CF (1983) Spectroscopic studies of cutaneous photosensitizing agents–III. Spin trapping of photolysis products from sulfanilamide analogs. Photochem Photobiol 37:17–26
CommonCrawl
Title: Doubly robust confidence sequences for sequential causal inference
(Submitted on 11 Mar 2021 (this version), latest version 3 Jan 2023 (v6))
Abstract: This paper derives time-uniform confidence sequences (CS) for causal effects in experimental and observational settings. A confidence sequence for a target parameter $\psi$ is a sequence of confidence intervals $(C_t)_{t=1}^\infty$ such that every one of these intervals simultaneously captures $\psi$ with high probability. Such CSs provide valid statistical inference for $\psi$ at arbitrary stopping times, unlike classical fixed-time confidence intervals which require the sample size to be fixed in advance. Existing methods for constructing CSs focus on the nonasymptotic regime where certain assumptions (such as known bounds on the random variables) are imposed, while doubly-robust estimators of causal effects rely on (asymptotic) semiparametric theory. We use sequential versions of central limit theorem arguments to construct large-sample CSs for causal estimands, with a particular focus on the average treatment effect (ATE) under nonparametric conditions. These CSs allow analysts to update statistical inferences about the ATE in light of new data, and experiments can be continuously monitored, stopped, or continued for any data-dependent reason, all while controlling the type-I error rate. Finally, we describe how these CSs readily extend to other causal estimands and estimators, providing a new framework for sequential causal inference in a wide array of problems.
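To make concrete why fixed-time intervals are not enough here, the small simulation below (a generic illustration with assumed Gaussian data, not taken from the paper) recomputes a classical 95% CLT interval after every new observation and records how often it ever excludes the true mean. Under such continuous monitoring the time-uniform error rate greatly exceeds the nominal 5%, which is exactly the failure mode that a confidence sequence is designed to avoid.

```python
import numpy as np

rng = np.random.default_rng(0)
true_mean, n_max, n_sims = 0.0, 1000, 2000
z = 1.96  # two-sided 95% normal quantile

ever_excluded = 0
for _ in range(n_sims):
    x = rng.normal(true_mean, 1.0, n_max)
    n = np.arange(1, n_max + 1)
    csum, csq = np.cumsum(x), np.cumsum(x**2)
    mean = csum / n
    var = np.maximum((csq - n * mean**2) / np.maximum(n - 1, 1), 0.0)  # running sample variance
    halfwidth = z * np.sqrt(var / n)
    # did the running 95% CI ever exclude the true mean (monitoring from n = 10 on)?
    if np.any(np.abs(mean - true_mean)[9:] > halfwidth[9:]):
        ever_excluded += 1

print("P(fixed-time 95% CI ever excludes the truth under continuous monitoring):",
      ever_excluded / n_sims)   # far above the nominal 0.05
```

A genuine confidence sequence replaces the fixed z-quantile with a boundary that widens slowly enough in t for the coverage statement to hold simultaneously over all times.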
CommonCrawl
Ramanujan's series $1+\sum_{n=1}^{\infty}(8n+1)\left(\frac{1\cdot 5\cdots (4n-3)}{4\cdot 8\cdots (4n)}\right)^{4}$
This is a repost from MSE as I haven't got anything so far there. Ramanujan gave the following series evaluation $$1+9\left(\frac{1}{4}\right)^{4}+17\left(\frac{1\cdot 5}{4\cdot 8}\right)^{4}+25\left(\frac{1\cdot 5\cdot 9}{4\cdot 8\cdot 12}\right)^{4}+\cdots=\dfrac{2\sqrt{2}}{\sqrt{\pi}\Gamma^{2}\left(\dfrac{3}{4}\right)}$$ in his first and famous letter to G H Hardy. The form of the series is similar to his famous series for $1/\pi$ and hence a similar approach might work to establish the above evaluation. Thus if $$f(x) =1+\sum_{n=1}^{\infty}\left(\frac{1\cdot 5\cdots (4n-3)}{4\cdot 8\cdots (4n)}\right)^{4}x^{n}$$ then Ramanujan's series is equal to $f(1)+8f'(1)$. Unfortunately the series for $f(x)$ does not appear to be directly related to elliptic integrals or amenable to Clausen's formula used in the proofs for his series for $1/\pi$. Is there any way to proceed with my approach? Any other approaches based on hypergeometric functions and their transformations are also welcome. Any reference which deals with this and similar series would also be helpful.
nt.number-theory sequences-and-series
Comment: Your $f(x)$ is $${}_4 F_3\left({{\frac14,\frac14,\frac14,\frac14}\atop{1,1,1}}\middle|x\right)$$; unfortunately, most of the literature on ${}_4 F_3$ functions of unit argument is concerned with the "balanced" or "Saalschützian" cases, of which your function is not one. – J. M. is not a mathematician Nov 11 '17 at 9:29
Answer: There is a constant $C$ such that $$\sum_{n=0}^{\infty} \frac{(\frac14)_n^3(\frac14 - k)_n}{(1)_n^3(1+k)_n} (8n+1) = C \frac{\Gamma(\frac12+k) \Gamma(1+k)}{\Gamma^2(\frac34+k)}$$ Proof: WZ-method + Carlson's theorem (see this paper). Then, taking $k=1/4$ we see that the only term inside the sum which is not zero is the term for $n=0$, which is equal to $1$. This allows us to determine $C$, and we get $\, C=2 \sqrt{2}/\pi$. Finally, taking $k=0$, we obtain the value of the sum of that Ramanujan series. – Jesús Guillera
Comment: That's very ingenious to add a parameter $k$. +1 for now. Will wait for sometime before accept. – Paramanand Singh Feb 5 '18 at 2:14
Answer: Ramanujan's result is a particular case of Dougall's theorem $${}_5F_4\left(\genfrac{}{}{0pt}{} {\frac{n}{2}+1,n,-x,-y,-z} {\frac{n}{2},x+n+1,y+n+1,z+n+1};1\right )=\frac{\Gamma(x+n+1)\Gamma(y+n+1)\Gamma(z+n+1)\Gamma(x+y+z+n+1)}{\Gamma(n+1)\Gamma(x+y+n+1)\Gamma(y+z+n+1)\Gamma(x+z+n+1)},$$ with $x=y=z=-n=-\frac{1}{4}$. See page 24 in the book B.C. Berndt, Ramanujan's Notebooks, Part II: http://www.springer.com/in/book/9780387967943 A variant of Dougall's identity can be used to get many Ramanujan type series for $1/\pi$, see https://www.sciencedirect.com/science/article/pii/S0022247X1101184X (A summation formula and Ramanujan type series, by Zhi-Guo Liu). – Zurab Silagadze
Comment: The Dougall's general formula is very nice. In arxiv.org/pdf/1611.04385.pdf I use the WZ-method to accelerate it. See also the references in the paper. – Jesús Guillera Feb 6 '18 at 13:07
Comment: I think this might be how Ramanujan got his formula. Ramanujan possessed many general formulas but always gave specific results.
+1 – Paramanand Singh Feb 6 '18 at 13:11
Comment: Yes, Ramanujan possessed many general formulas as he independently rediscovered practically all of the major classical theorems on hypergeometric functions. The WZ-method is, of course, very nice, something like magic. I wonder how Ramanujan missed it. There is a beautiful book about Ramanujan, "My Search for Ramanujan" by Ken Ono and Amir Aczel. As indicated in the book, "When asked how he obtained his results, Ramanujan would reply that his family goddess, Namagiri, sent him visions in which mathematical formulas would unfold before his eyes". – Zurab Silagadze Feb 6 '18 at 13:49
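A quick numerical check of the closed form discussed above (an illustration, not a proof; plain Python using only the standard library):

```python
import math

def partial_sum(N):
    """Sum of (8n+1) * (1*5***(4n-3) / (4*8***(4n)))^4 for n = 0..N (the n = 0 term is 1)."""
    total, ratio = 1.0, 1.0          # ratio holds the product for the current n
    for n in range(1, N + 1):
        ratio *= (4 * n - 3) / (4 * n)
        total += (8 * n + 1) * ratio ** 4
    return total

closed_form = 2 * math.sqrt(2) / (math.sqrt(math.pi) * math.gamma(0.75) ** 2)
print(partial_sum(200000), closed_form)   # both ~1.0626...
```

Since the summand decays roughly like $1/n^{2}$, the tail beyond $N$ is of order $1/N$, so with $N = 2\times 10^{5}$ terms the partial sum agrees with $2\sqrt{2}/(\sqrt{\pi}\,\Gamma(3/4)^{2})$ to roughly six decimal places.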
CommonCrawl
The effects of the duration, intensity and magnitude of far-fault earthquakes on the seismic response of RC bridges retrofitted with seismic bearings
Original innovation
Saman Mansouri1, Denise-Penelope N. Kontoni2,3 (ORCID: 0000-0003-4844-1094) & Majid Pouraminian4 (ORCID: 0000-0001-5648-8365)
Advances in Bridge Engineering, volume 3, Article number: 19 (2022)
This paper investigates the effects of earthquakes' duration, intensity, and magnitude on the seismic response of reinforced concrete (RC) bridges retrofitted with seismic bearings, such as elastomeric bearings (EB), lead rubber bearings (LRB), and friction pendulum bearings (FPB). In order to investigate the effects of the seismic isolation, the condition of the deck with a rigid connection on the cap beams and abutments (i.e., without isolation) was investigated as the first model. The EB, LRB and FPB bearings are used between the superstructure and substructure of the studied bridge in the second, third and fourth models, respectively. First, the effects of using seismic bearings on the seismic retrofit of an RC bridge under the Tabas earthquake were investigated. The results of the nonlinear dynamic analysis showed that the use of seismic bearings leads to an effective seismic retrofit of the studied bridge, and FPB and LRB had the best results among the studied isolation equipment, respectively. The same models were also studied subjected to the Landers and Loma Prieta earthquakes. The magnitude of the Landers and Tabas earthquakes is equal to 7.3 Richter, and the magnitude of the Loma Prieta earthquake is equal to 6.7 Richter. However, the duration and intensity of the Landers and Loma Prieta earthquakes are much larger than those of the Tabas earthquake. The Landers and Loma Prieta earthquakes caused instability in the isolated models due to their significant duration and intensity. This issue shows that using seismic bearings is very useful and practical for the seismic retrofit of bridges subjected to far-fault earthquakes. According to most seismic codes, selecting earthquakes in the far region of faults is based only on the magnitude criterion. However, this study indicates that there are two further main factors in the features of far-fault earthquakes, namely duration and intensity. Ignoring these factors in selecting earthquakes may lead to the instability of structures. Considering earthquakes' duration, intensity, and magnitude together is vital for selecting earthquakes in the far region of the fault. The most widely used method for transportation and travel around the world is the road transportation network. Bridges are one of the most critical parts of the road and rail transport network. For this reason, their stability against various natural disasters such as earthquakes is always discussed and researched. Most bridges were built after World War II. These structures need seismic retrofit for reasons such as the aging of the structures and the evolution of seismic codes. One of the most effective solutions for the seismic retrofit of bridges is the use of energy dissipation equipment. Extensive studies suggest that using seismic bearings and dampers can effectively retrofit buildings and bridges (Xiang and Li 2016; Mansouri and Nazari 2017; Zhou and Tan 2018; Luo et al 2019; Kontoni and Farghaly 2019).
Recently, studies on the effects of using seismic bearings and dampers indicated that these devices could reduce the seismic response of structures (Cho et al 2020; Hassan and Billah 2020; Cao et al 2020; Zheng et al 2020; Wei et al 2021; Xing et al 2021; Ristic et al 2021; Khedmatgozar Dolati et al 2021; Yuan et al 2021; Chen and Xiong 2022; Marnani et al 2022; Guo et al 2022). In addition, Ma et al (2021) examined the dynamic response of a story-adding structure with an isolation technique subjected to near-fault ground motions. The results showed that using seismic isolation in both base-isolated and story-isolated structures led to a reduction in the seismic response of the structures. In the paper of Mansouri (2021a), the strategies for the seismic retrofit of a bridge were discussed. The results indicated that the displacements of the deck, cap beams, and abutment were equal in the integrated bridge, and their value was close to zero. This seismic behavior considerably increased the base shear in integrated bridges compared to isolated ones. In contrast, when seismic bearings are used, the deck slides on the bearings under seismic loads. This seismic behavior increases the absorption and dissipation of energy in the isolated structure compared to the integrated structure. Moreover, the study by An et al (2020) assessed the response of bridges with various aspect ratios subjected to near- and far-fault earthquakes. This study showed that as the aspect ratio of the pier increased, the tendency of the maximum displacement of the bridge to increase or decrease was similar to that of the displacement spectrum. On the other hand, the rate of increase or decrease in the displacement response was more significant than that of the response spectrum. Some differences between the maximum displacement and the displacement spectrum could be attributed to the effects of the aspect ratio of the bridge. Also, Fu et al (2022) investigated the temperature-dependent performance of isolated bridges using lead rubber bearings subjected to near-fault earthquakes. The results showed that the absorbed energy and shear strain of the LRBs increased; consequently, if the effect of lead core heating is not considered, the seismic demand on the LRB would be underestimated, although lead core heating has a negligible impact on the total input energy. To mitigate the harmful effect of the vibration generated by each earthquake, four mitigation schemes were used and compared with the non-mitigation model to determine the effectiveness of each scheme when applied to the SSI or fixed CSB models (Kontoni and Farghaly 2019). According to the distance between the earthquake recording station and the fault, earthquakes are divided into two groups, namely near-fault and far-fault earthquakes. Each group of earthquakes has specific characteristics (Mansouri 2017; Mansouri 2021b). Some earthquakes in the far region of the fault have significant duration and intensity effects. Examining the effects of far-fault earthquakes that have significant duration and intensity is very important in the seismic retrofit of bridges. Ignoring these phenomena in evaluating and designing bridges may lead to significant damage. So far, no proper study on the seismic retrofit of bridges using energy dissipation equipment under far-fault earthquakes with significant duration and intensity has been conducted.
Therefore, this paper evaluates the effects of using seismic bearings in the seismic retrofit of bridges against far-fault earthquakes with significant duration and intensity effects. 2 The studied existing RC bridge The studied bridge is located at the intersection of the Dogaz Highway with the Tehran-Karaj Freeway (Mansouri 2021a). This bridge has six spans. Figure 1 shows the plan view of the bridge and the deck, cap beam, and deck beam cross-sections. The width of the deck was 17 m. The lengths of the lateral and middle spans were 12.6 m and 18.5 m, respectively. The plan view and cross-sections of the existing RC bridge. a Plan view (units in m). b Cross-section of the deck (units in m). c Cross-section of the bent cap beam (units in cm). d Cross-section of the deck beam (units in cm) 3 The studied bridge models This bridge uses conventional neoprene between the substructure and the superstructure. These bearings do not have a high capacity for dissipating the energy caused by earthquakes. For this reason, strategies for the seismic retrofit of the studied bridge are investigated. In the first model, the deck rests on the substructure with a rigid connection. In the second, third, and fourth models, EB (in the second model), LRB (in the third model) and FPB (in the fourth model) are used between the deck and the substructure, respectively. Figure 2 shows a three-dimensional view and a view of the plan of the studied bridge. The CSI Bridge® 2022 software was used to model the bridge, and nonlinear time history analysis was used to study it. For this purpose, nonlinear behavior models were used for the materials and the seismic bearings. The studied model of the RC bridge. a A three-dimensional view of the studied model. b A view of the plan of the studied model 4 Energy dissipation equipment In this study, the effects of using an elastomeric bearing (EB), a lead rubber bearing (LRB), and a friction pendulum bearing (FPB) on the seismic response of bridges are investigated. An elastomeric bearing (EB) is a type of seismic bearing consisting of alternating steel and rubber layers; that is, a rubber bearing reinforced with steel sheets (e.g., Jabbareh Asl et al 2014). Using the following information (Akogul and Celik 2008), the EB can be modeled as: $${K}_H={k}_{eff}=\frac{G_{eff}\ A}{H_r}=\frac{680\times 0.1575}{0.061}=1755\ kN/m$$ $${K}_V=\frac{E_C\ A}{H}=\frac{617263\times 0.1575}{0.085}=1143752\ kN/m$$ $${K}_{\theta }=\frac{E\ I}{H_r}=\frac{617263\times 0.0016}{0.061}=16270\ kN/m$$ To overcome the limited capacity of the EB to reduce seismic forces in the structure, a lead core is added at its center (see, e.g., Mansouri et al 2017). A lead rubber bearing (LRB) can be modeled with the following specifications (Torunbalci and Ozpalanlar 2008): Link element = Rubber Isolator. U1 → linear Effective Stiffness = 1,500,000 kN/m. U2 = U3 → linear Effective Stiffness = 800 kN/m. U2 = U3 → Nonlinear Stiffness = 2500 kN/m. U2 = U3 → Yield Strength = 80. U2 = U3 → Post Yield Stiffness Ratio = 0.1. A friction pendulum bearing (FPB) is one of the seismic bearings used to reduce the seismic response of bridges (Hong et al 2020). Using an FPB increases the period of isolated bridges and improves their protection against strong earthquakes. The FPB can be modeled with the following specifications (Torunbalci and Ozpalanlar 2008): Link element = Friction isolator. U1 → linear Effective Stiffness = 15,000,000 kN/m. U1 → Nonlinear Effective Stiffness = 15,000,000 kN/m.
U2 = U3 → Nonlinear Stiffness = 15,000 kN/m. U2 = U3 → Friction Coefficient Slow = 0.03. U2 = U3 → Rate Parameter = 40. U2 = U3 → Radius of Sliding Surface = 2.23. The properties of these seismic bearings (EB, LRB, and FPB) were designed and verified according to the Iranian code No. 523 (2010). 5 Eigenvalue analysis The stiffness of the isolated bridge is less than that of the integrated bridge. Because of this, the period of the isolated bridges is longer than that of the integrated bridge. According to Fig. 3, the period of the first model is equal to 0.14 seconds (integrated bridge), and the periods of the second to fourth models (isolated bridges) are equal to 0.96, 1.39, and 1.43 seconds, respectively. Using seismic bearings in bridges leads to a considerable increase in the period compared to the integrated bridge. The period of the studied models 6 Selected earthquakes According to an interpretation study of the Iranian seismic code (Mansouri 2017), seven earthquakes have been selected with a magnitude between 6.5 and 7.5, including the Chi-Chi, Landers, Loma Prieta, Northridge, Parkfield, San Fernando, and Tabas earthquakes. The recording stations of these earthquakes are located between 20 and 60 km from the fault. The characteristics of these earthquakes were taken from the Pacific Earthquake Engineering Research Center (PEER) 2022 website (https://peer.berkeley.edu/peer-strong-ground-motion-databases), and the SeismoSignal 2022 software was used to edit and extract the information. The records of the horizontal components of each earthquake are combined using the SRSS (Square Root of the Sum of the Squares) method to obtain a single spectrum for each earthquake. The seven spectra calculated for the seven earthquakes are then added together, and their average is calculated to obtain the average spectrum of the earthquakes. According to the Iranian seismic code, the scale factor is calculated for each earthquake; the combination and averaging procedure is sketched after this paragraph. The response spectrums for all seven earthquakes are shown in Figs. 4 and 5 compares the response spectrum obtained from the Iranian seismic code with the response spectrum obtained from the average response spectrum of the seven earthquakes. The response spectrums of the selected seven earthquakes The comparison of the response spectrum of Iranian Standard No. 2800 with the response spectrum obtained from the average response spectrum of the selected seven earthquakes Figures 6, 7, 8, 9, 10 and 11 show the acceleration-time histories of the horizontal components of the Tabas, Landers, and Loma Prieta earthquakes. The magnitude of both the Tabas and Landers earthquakes is equal to 7.3 Richter. For this reason, the effect of these far-fault earthquakes on the seismic response of the bridge is investigated. The magnitude of the Loma Prieta earthquake is equal to 6.7 Richter, but the duration and intensity of the Landers and Loma Prieta earthquakes are much greater than those of the Tabas earthquake. For this reason, to investigate the effect of the magnitude of far-fault earthquakes, the effect of the Loma Prieta earthquake on the bridge is investigated and compared to the seismic response of this bridge under the Tabas earthquake and the Landers earthquake.
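The SRSS combination and averaging step described above can be sketched as follows. The spectra here are placeholder arrays standing in for the actual spectra of the seven selected records, so only the procedure, not the numbers, is meaningful.

```python
import numpy as np

def srss_spectrum(sa_h1, sa_h2):
    """Combine the response spectra of the two horizontal components of one record
    with the SRSS rule, giving a single spectrum per earthquake."""
    return np.sqrt(np.asarray(sa_h1)**2 + np.asarray(sa_h2)**2)

# Placeholder spectra (spectral acceleration in g) at common periods -- illustration only
periods = np.linspace(0.05, 4.0, 80)
records = [(np.exp(-periods) * a, np.exp(-periods) * b)
           for a, b in [(0.9, 0.7), (1.2, 1.0), (0.8, 0.6), (1.1, 0.9),
                        (0.7, 0.8), (1.0, 1.1), (0.95, 0.85)]]

combined = np.array([srss_spectrum(h1, h2) for h1, h2 in records])
average_spectrum = combined.mean(axis=0)   # average over the seven earthquakes

# A scale factor for each record would then be chosen so that this average does not
# fall below the code design spectrum over the period range of interest.
print(average_spectrum[:5])
```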
In the following, the models' response to these earthquakes is examined. Usually, according to Fig. 6, records of far-fault earthquakes can be divided into three parts. The ground motions of the first and third parts are negligible and do not have strong effects on the seismic response of the structure. However, the strong ground motions in the second part strongly affect the seismic response of structures. In particular, two vital factors characterize this second part: duration (related to the horizontal axis of the record) and intensity (related to the vertical axis of the record). The record of the horizontal component (STG000) of the Loma Prieta earthquake The other record of the horizontal component (STG090) of the Loma Prieta earthquake The record of the horizontal component (DAY-LN) of the Tabas earthquake The other record of the horizontal component (DAY-TR) of the Tabas earthquake The record of the horizontal component (LCN260) of the Landers earthquake The other record of the horizontal component (LCN345) of the Landers earthquake Most seismic codes consider the magnitude and/or PGA (Peak Ground Acceleration) of earthquakes when selecting records for the seismic design of structures. The PGA corresponds to a single instant of the record, whereas the AGA (Average Ground Acceleration) over the second part of the record strongly influences the seismic response of a structure (Mansouri 2021b). Usually, several stations record a given earthquake. Features of the earthquake, such as the acceleration-time, velocity-time, and displacement-time records, differ from station to station. For this reason, the intensity and duration of an earthquake recorded at different stations are not equal. However, magnitude measures the size of the earthquake at its source. Every earthquake has a single magnitude, which does not depend on the recording stations. The effect of magnitude as a vital criterion for selecting earthquakes (as adopted by seismic codes) on the seismic response of the structure is investigated in this paper.
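The paper does not prescribe a specific numerical definition of record duration and intensity, so the sketch below adopts two common assumptions: the Arias intensity as an intensity measure and the 5–95% Arias-based significant duration as a duration measure, reported alongside the PGA and an average ground acceleration (AGA) taken over that strong-motion window.

```python
import numpy as np

def strong_motion_measures(acc_g, dt):
    """acc_g: acceleration record in units of g, dt: time step in s.
    Returns PGA (g), Arias intensity (m/s), 5-95% significant duration (s),
    and the average absolute acceleration (g) within that strong-motion window."""
    g = 9.81
    acc = acc_g * g
    pga = np.max(np.abs(acc_g))
    ia_cum = np.pi / (2 * g) * np.cumsum(acc**2) * dt    # cumulative Arias intensity
    ia = ia_cum[-1]
    t5 = np.searchsorted(ia_cum, 0.05 * ia) * dt
    t95 = np.searchsorted(ia_cum, 0.95 * ia) * dt
    d5_95 = t95 - t5
    window = slice(int(t5 / dt), int(t95 / dt) + 1)
    aga = np.mean(np.abs(acc_g[window]))
    return pga, ia, d5_95, aga

# Usage with a placeholder record; a real analysis would load, e.g., the DAY-LN
# accelerogram of the Tabas earthquake from the PEER database instead.
dt = 0.02
time = np.arange(0, 20, dt)
acc_g = 0.3 * np.sin(2 * np.pi * 1.5 * time) * np.exp(-0.1 * time)
print(strong_motion_measures(acc_g, dt))
```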
7 Nonlinear time history analysis The seismic response of the studied models exposed to the Tabas earthquake, the Loma Prieta earthquake, and the Landers earthquake is studied using time-history analysis. According to Fig. 12, the point "Deck" is in the middle of the deck, and the point "Cap beam" is in the middle of cap beam 3. The unit system of all graphs is in Ton-cm. A view of the studied points 7.1 Results of lateral displacement Figures 13, 14, 15, 16, 17, 18, 19 and 20 show the results of the horizontal displacements of the studied points subjected to the Tabas earthquake. The lateral displacement of points Deck and Cap beam in the longitudinal direction (X) in model 1 under the Tabas earthquake The lateral displacement of points Deck and Cap beam in the transverse direction (Y) in model 1 under the Tabas earthquake The lateral displacement of points Deck and Cap beam in the longitudinal direction (X) in model 2 under the Tabas earthquake The lateral displacement of points Deck and Cap beam in the transverse direction (Y) in model 2 under the Tabas earthquake According to Figs. 13, 14, 15, 16, 17, 18, 19 and 20, in the integrated bridge, the horizontal displacements of the studied points on the deck and cap beams are equal to each other, and their value is negligible. However, in the isolated bridge, the deck slips on the seismic bearings, so that the displacement of the substructure is minimal and the deck dissipates the energy caused by the earthquake through horizontal movements. Table 1 shows the results of the maximum lateral displacement of the deck for the isolated bridges exposed to the Tabas earthquake, the Loma Prieta earthquake, and the Landers earthquake. For the second to fourth models, the maximum horizontal displacements of the deck were obtained under the mentioned earthquakes. The maximum lateral displacement of the deck is 36 cm for the second model, 25 cm for the third model, and 295 cm for the fourth model. Comparing the maximum lateral displacements with the allowable displacements for these seismic bearings, it is clear that the isolated models are unstable under the Landers earthquake and the Loma Prieta earthquake. The deck of the isolated bridges may fall from the substructures under the Landers earthquake and the Loma Prieta earthquake because the maximum lateral displacements of the deck for the isolated bridges under these earthquakes are higher than the allowable displacements of the seismic bearings. Time history analysis does not show structural collapse levels. Table 1 The maximum lateral displacement of the deck in isolated bridges 7.2 Results of input and kinetic energies During an earthquake, energy is imparted to structures. Part of the input energy is transformed into kinetic energy. For this reason, the input and kinetic energies of the second model are investigated in this study. The energy should be either absorbed or dissipated. This issue is made clear by considering the conservation of energy relationship (1), where E is the absolute energy input from the earthquake motion, E_k is the absolute kinetic energy, E_s is the recoverable elastic strain energy, and E_h is the irrecoverable energy dissipated by the structural system through inelastic or other forms of action. E_d is the energy dissipated by supplemental damping devices (Constantinou and Symans 1993).
$$E = E_k + E_s + E_h + E_d \quad (1)$$
The input and kinetic energy results for the studied models subjected to the Tabas earthquake are shown in Fig. 21. The stiffness of the isolated bridge is lower than that of the integrated bridge. For this reason, the isolated bridge absorbs more energy than the integrated bridge. Input energy and kinetic energy results for the studied models under the Tabas earthquake 7.3 Results of base shear The results of the base shear for the integrated and isolated bridges under the Tabas earthquake are shown in Fig. 22. The base shear for the first model in the x and y directions is equal to 1081 and 1505 tons, respectively. In the second model, by using the elastomeric bearing, the base shear of the bridge is reduced to 825 and 853.6 tons, respectively. In the third model, by using the LRB in the bridge, the base shear of the structure is reduced to 563.7 and 405.3 tons, respectively. The base shear in the fourth model, which uses the FPB in the bridge, is equal to 541.1 and 370.2 tons in the x and y directions, respectively. The results of the base shear of the bridge in the longitudinal (X) and transversal (Y) directions under the influence of the Tabas earthquake The results showed that using seismic bearings in bridges reduces the seismic response of the structure compared to the integrated bridge. This result is vital because the seismic capacity of the bridge's structural elements is limited.
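The base shear values quoted above translate directly into reduction percentages relative to the integrated bridge (model 1); the short script below simply reproduces that arithmetic from the reported values.

```python
# Base shear (tons) of the studied models under the Tabas earthquake, as reported above
base_shear = {
    "Model 1 (integrated)": (1081.0, 1505.0),
    "Model 2 (EB)":         (825.0,  853.6),
    "Model 3 (LRB)":        (563.7,  405.3),
    "Model 4 (FPB)":        (541.1,  370.2),
}

ref_x, ref_y = base_shear["Model 1 (integrated)"]
for name, (vx, vy) in base_shear.items():
    red_x = 100 * (1 - vx / ref_x)
    red_y = 100 * (1 - vy / ref_y)
    print(f"{name}: reduction X = {red_x:.1f}%, Y = {red_y:.1f}%")
# e.g. the FPB model reduces the base shear by roughly 50% (X) and 75% (Y)
```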
The magnitude of the Landers and Tabas earthquakes was 7.3 Richter. However, the duration and intensity of the Landers earthquake are higher than those of the Tabas earthquake. The magnitude of the Loma Prieta earthquake is 6.7 Richter. However, the duration and intensity of the Loma Prieta earthquake are also higher than those of the Tabas earthquake. This study shows that duration and intensity are two vital factors in the seismic response of structures. For these reasons, magnitude alone is an insufficient criterion for selecting earthquakes. Considering magnitude, duration, and intensity together provides a suitable criterion for selecting earthquakes for the seismic retrofit and seismic design of structures. According to Figs. 23 and 24, the records of the translational components of the Landers earthquake, Loma Prieta earthquake, and Tabas earthquake are investigated and compared, and the PGA, duration, and intensity of the strong ground motion are evaluated. The duration of strong ground motion in the Landers earthquake and Loma Prieta earthquake is much larger than the corresponding value in the Tabas earthquake. Besides, the duration and intensity of the Landers earthquake are much larger than those of the Loma Prieta earthquake. The comparison of the first horizontal record of the Tabas earthquake, Landers earthquake, and Loma Prieta earthquake The comparison of the other horizontal record of the Tabas earthquake, Landers earthquake, and Loma Prieta earthquake Investigating the records of the Tabas earthquake indicates that the PGAs of the two orthogonal horizontal components are 0.328 g (component DAY-LN) and 0.406 g (component DAY-TR), respectively. Also, the examination of the Landers earthquake records shows that the PGAs of the two orthogonal horizontal components are 0.727 g (component LCN260) and 0.789 g (component LCN345), respectively. In addition, for the Loma Prieta earthquake, the PGAs of the two orthogonal horizontal components are 0.512 g (component STG000) and 0.323 g (component STG090), respectively. The isolated bridges are unstable under the Landers earthquake and the Loma Prieta earthquake because the PGA, duration, and intensity of the Landers and Loma Prieta records are much larger than the corresponding values for the Tabas earthquake. For this reason, the deck of the isolated bridges experiences lateral displacements larger than the allowable displacements of the bearings. On the one hand, seismic bearings are very useful and practical for the seismic retrofit of bridges. On the other hand, there are vital features of far-fault earthquakes that are ignored by seismic codes when selecting earthquakes for the design and seismic retrofit of structures, such as duration and intensity.
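As a purely hypothetical illustration of this recommendation, a record-selection workflow could flag candidate far-fault records whose duration or intensity clearly exceeds the levels represented in the design record set, instead of screening on magnitude alone. The reference values and field names below are assumptions for illustration only, not limits proposed by the paper or by any code.

```python
def needs_extra_scrutiny(d5_95_s, arias_m_s,
                         ref_duration_s=16.0,   # hypothetical: duration level of the design record set
                         ref_arias_m_s=1.5):    # hypothetical: Arias intensity level of that set
    """Flag a candidate far-fault record whose significant duration or Arias intensity
    clearly exceeds the levels represented in the design record set, so that the
    displacement demand on the isolation bearings is re-checked for it.
    The reference values are illustrative assumptions only."""
    return d5_95_s > ref_duration_s or arias_m_s > ref_arias_m_s

# Purely illustrative numbers: the second, longer and more intense record is flagged.
print(needs_extra_scrutiny(d5_95_s=13.0, arias_m_s=1.2))   # False
print(needs_extra_scrutiny(d5_95_s=48.0, arias_m_s=7.0))   # True
```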
Given the uncertainty of earthquake characteristics and the need to respect economic constraints, it is vital to set duration and intensity limits according to the structure's importance and the site's characteristics. Ignoring the duration and intensity parameters when selecting far-fault earthquakes can lead to instability of isolated bridges under earthquakes with significant duration and intensity. These issues show that seismic regulations need a fundamental revision of the parameters used to select earthquakes.

The Landers and Tabas earthquakes have the same magnitude (7.3 on the Richter scale), and the Loma Prieta earthquake has a magnitude of 6.7. Nevertheless, the isolated bridges are unstable under the Landers and Loma Prieta earthquakes and stable under the Tabas earthquake. Comparing the characteristics of these earthquakes makes it clear that their durations and intensities are very different: the duration and intensity of the Landers and Loma Prieta earthquakes are higher than those of the Tabas earthquake, and this is what leads to the instability of the isolated bridges. This indicates that magnitude alone is an unsuitable criterion for selecting earthquakes for seismic design or seismic retrofit. In areas far from faults, it is crucial to consider the duration and intensity of earthquake records. In addition, merely installing seismic bearings does not by itself constitute a seismic retrofit; the characteristics of the site, the level of seismic risk, the structure under study, and the features of the mechanical equipment must all be examined.

All data generated or analyzed during this study are included in this published article.

Akogul C, Celik OC (2008) Effect of elastomeric bearing modeling parameters on the seismic design of RC highway bridges with precast concrete girders. The 14th World Conference on Earthquake Engineering, Beijing
An H, Lee JH, Shin S (2020) Dynamic response evaluation of bridges considering aspect ratio of pier in near-fault and far-fault ground motions. Appl Sci 10. https://doi.org/10.3390/app10176098
Cao S, Ozbulut OE, Wu S, Sun Z, Deng J (2020) Multi-level SMA/lead rubber bearing isolation system for seismic protection of bridges. Smart Mater Struct 29(5):055045. https://doi.org/10.1088/1361-665X/ab802b
Chen X, Xiong J (2022) Seismic resilient design with base isolation device using friction pendulum bearing and viscous damper. Soil Dyn Earthq Eng 153:107073. https://doi.org/10.1016/j.soildyn.2021.107073
Cho CB, Kim YJ, Chin WJ, Lee JY (2020) Comparing Rubber Bearings and Eradi-Quake System for Seismic Isolation of Bridges. Materials 13(22):5247. https://doi.org/10.3390/ma13225247
Constantinou MC, Symans MD (1993) Seismic response of structures with supplemental damping. Struct Des Tall Build 2:77–99
CSI Bridge® (2022) Bridge Analysis, Design and Rating. Computers and Structures, Inc., Walnut Creek and New York. https://www.csiamerica.com/products/csibridge
Fu JY, Cheng H, Wang DS, Zhang R, Xu TT (2022) Temperature-dependent performance of LRBs and its effect on seismic responses of isolated bridges under near-fault earthquake excitations. Structures 41:619–628. https://doi.org/10.1016/j.istruc.2022.05.046
Guo W, Li J, Guan Z, Chen X (2022) Pounding performance between a seismic-isolated long-span girder bridge and its approaches. Eng Struct 262:114397.
https://doi.org/10.1016/j.engstruct.2022.114397 Hassan AL, Billah AM (2020) Influence of ground motion duration and isolation bearings on the seismic response of base-isolated bridges. Eng Struct 222:111129. https://doi.org/10.1016/j.engstruct.2020.111129 Hong X, Guo W, Wang Z (2020) Seismic analysis of coupled high-speed train-bridge with the isolation of friction pendulum bearing. Adv Civil Eng. https://doi.org/10.1155/2020/8714174 Iranian code No. 523 (2010) Guideline for Design and Practice of Base Isolation Systems in Buildings. Office of Deputy for Strategic Supervision Bureau of Technical Execution System Jabbareh Asl M, Rahman M, Karbakhsh A (2014) Numerical Analysis of Seismic Elastomeric Isolation Bearing in the Base-Isolated Buildings. Open J Earthquake Res 3. https://doi.org/10.4236/ojer.2014.31001 Khedmatgozar Dolati SS, Mehrabi A, Khedmatgozar Dolati SS (2021) Application of Viscous Damper and Laminated Rubber Bearing Pads for Bridges in Seismic Regions. Metals 11(11):1666. https://doi.org/10.3390/met11111666 Kontoni D-PN, Farghaly AA (2019) Mitigation of the seismic response of a cable-stayed bridge with soil-structure-interaction effect using tuned mass dampers. Struct Eng Mech 69(6):699–712. https://doi.org/10.12989/sem.2019.69.6.699 Luo Z, Yan W, Xu W, Zheng Q, Wang B (2019) Experimental research on the multilayer compartmental particle damper and its application methods on long-period bridge structures. Front Struct Civ Eng 13:751–766. https://doi.org/10.1007/s11709-018-0509-z Ma XT, Bao C, Doh SI, Lu H, Zhang LX, Ma ZW, He YT (2021) Dynamic response analysis of story-adding structure with isolation technique subjected to near-fault pulse-like ground motions. J Phys Chem Earth 121. https://doi.org/10.1016/j.pce.2020.102957 Mansouri S (2017) Interpretation of "Iranian code of practice for seismic resistant design of building" (Standard No. 2800, 4th edition). Published by Simaye Danesh, Tehran Mansouri S (2021a) The investigation of the effect of using energy dissipation equipment in seismic retrofitting an exist highway RC bridge subjected to far-fault earthquakes. Int J Bridge Eng (IJBE) 9(3):51–84 Mansouri S (2021b) The presentation of a flowchart to select near and far-fault earthquakes for seismic design of bridges and buildings based on defensible engineering judgment. Int J Bridge Eng (IJBE) 9(1):35–48 Mansouri S, Nazari A (2017) The effects of using different seismic bearing on the behavior and seismic response of high-rise building. Civil Eng J 3(3):160–171. https://doi.org/10.28991/cej-2017-00000082 Mansouri I, Ghodrati Amiri GR, Hu JW, Khoshkalam MR, Soori S, Shahbazi S (2017) Seismic fragility estimates of LRB base isolated frames using performance-based design. Shock Vib. https://doi.org/10.1155/2017/5184790 Marnani AB, SAR Z, Rahgozar MA, Behravan A (2022) Study on Modern Energy Absorption Systems in Structures Subjected to Earthquake Forces. Iranian J Sci Technol Transact Civ Eng 46:1–13. https://doi.org/10.1007/s40996-021-00677-w Pacific Earthquake Engineering Research Center (PEER) (2022) PEER Strong Ground Motion Databases, Pacific Earthquake Engineering Research Center. University of California, Berkeley Available online: https://peer.berkeley.edu/peer-strong-ground-motion-databases Ristic J, Brujic Z, Ristic D, Folic R, Boskovic M (2021) Upgrading of isolated bridges with space-bar energy-dissipation devices: Shaking table test. Adv Struct Eng 24(13):2948–2965. 
https://doi.org/10.1177/13694332211013918 SeismoSignal (2022) Earthquake Software for Signal Processing of Strong-Motion data. Seismosoft Ltd., Pavia https://seismosoft.com/products/seismosignal/ Torunbalci N, Ozpalanlar G (2008) Earthquake response analysis of mid-story buildings isolated with various seismic isolation techniques. The 14th World Conference on Earthquake Engineering, Beijing Wei B, Fu Y, Li S, Jiang L (2021) Influence on the seismic isolation performance of friction pendulum system when XY shear keys are sheared asynchronously. Structures 33:1908–1922. https://doi.org/10.1016/j.istruc.2021.05.064 Xiang N, Li J (2016) Seismic performance of highway bridges with different transverse unseating-prevention devices. J Bridg Eng 21(9). https://doi.org/10.1061/(ASCE)BE.1943-5592.0000909 Xing XK, Lin L, Qin H (2021) An efficient cable-type energy dissipation device for prevention of unseating of bridge spans. Structures 32:2088–2102. https://doi.org/10.1016/j.istruc.2021.03.116 Yuan Y, Wei W, Ni Z (2021) Analytical and experimental studies on an innovative steel damper reinforced polyurethane bearing for seismic isolation applications. Eng Struct 239:112254. https://doi.org/10.1016/j.engstruct.2021.112254 Zheng W, Wang H, Hao H, Bi K, Shen H (2020) Performance of bridges isolated with sliding-lead rubber bearings subjected to near-fault earthquakes. Int J Struct Stab Dyn 20(02):2050023. https://doi.org/10.1142/S0219455420500236 Article MathSciNet Google Scholar Zhou F, Tan P (2018) Recent progress and application on seismic isolation energy dissipation and control for structures in China. Earthq Eng Eng Vib 17:19–27. https://doi.org/10.1007/s11803-018-0422-4 The authors declare that they have no conflict of interest. Department of Civil Engineering, Dezfoul Branch, Islamic Azad University, Dezfoul, Iran Saman Mansouri Department of Civil Engineering, School of Engineering, University of the Peloponnese, GR-26334, Patras, Greece Denise-Penelope N. Kontoni School of Science and Technology, Hellenic Open University, GR-26335, Patras, Greece Department of Civil Engineering, Ramsar Branch, Islamic Azad University, Ramsar, Iran Majid Pouraminian SM: Conceptualization, Methodology, Software, Validation, Formal Analysis, Investigation, Data Curation, Writing—Original Draft Preparation, Visualization. DPNK: Conceptualization, Methodology, Validation, Investigation, Writing—Original Draft Preparation, Visualization, Writing—Review and Editing, Supervision, Project Administration. MP: Conceptualization, Methodology, Investigation, Writing—Original Draft Preparation, Writing—Review and Editing. All authors read and approved the final manuscript. Correspondence to Denise-Penelope N. Kontoni. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. 
To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Mansouri, S., Kontoni, DP.N. & Pouraminian, M. The effects of the duration, intensity and magnitude of far-fault earthquakes on the seismic response of RC bridges retrofitted with seismic bearings. ABEN 3, 19 (2022). https://doi.org/10.1186/s43251-022-00069-8 Far-fault earthquake Seismic retrofitting RC bridge Seismic bearings
Local structure-preserving algorithms for the molecular beam epitaxy model with slope selection

Lin Lu 1, Qi Wang 2, Yongzhong Song 1 and Yushun Wang 1

1 Jiangsu Key Laboratory for NSLSCS, School of Mathematical Sciences, Nanjing Normal University, Nanjing 210023, China
2 Department of Mathematics, University of South Carolina, Columbia, SC 29208, USA
* Corresponding author: Yushun Wang

Received March 2020  Revised August 2020  Published October 2020
Fund Project: The first author is supported by NSFC grant 11771213, 61872422

Based on the local energy dissipation property of the molecular beam epitaxy (MBE) model with slope selection, we develop three, second order fully discrete, local energy dissipation rate preserving (LEDP) algorithms for the model using finite difference methods. For periodic boundary conditions, we show that these algorithms are global energy dissipation rate preserving (GEDP). For adiabatic, physical boundary conditions, we construct two GEDP algorithms from the three LEDP ones with consistently discretized physical boundary conditions. In addition, we show that all the algorithms preserve the total mass at the discrete level as well. Mesh refinement tests are conducted to confirm the convergence rates of the algorithms and two benchmark examples are presented to show the accuracy and performance of the methods.

Keywords: Molecular beam epitaxy model, local energy dissipation rate preserving, global energy dissipation rate preserving, mass conservation, structure-preserving, finite difference methods.

Mathematics Subject Classification: Primary: 65M06; Secondary: 80M20.

Citation: Lin Lu, Qi Wang, Yongzhong Song, Yushun Wang. Local structure-preserving algorithms for the molecular beam epitaxy model with slope selection. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2020311
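For orientation, the MBE model with slope selection referred to in the abstract is usually written as the gradient flow of the energy E(φ) = ∫ [ (1/4)(|∇φ|² − 1)² + (ε²/2)(Δφ)² ] dx. The sketch below evaluates this energy and the total mass on a periodic grid with second-order central differences, which are the two quantities whose discrete dissipation/conservation the LEDP/GEDP schemes are designed to preserve. The specific form of the energy, the parameter value, and the initial profile are standard in this literature but should be treated as assumptions here; this is a conceptual illustration, not the authors' scheme.

```python
import numpy as np

def mbe_energy_and_mass(phi, h, eps=0.1):
    """Discrete slope-selection energy and total mass of phi on a periodic grid.

    E(phi) ~ sum_ij [ 0.25*(|grad phi|^2 - 1)^2 + 0.5*eps^2*(lap phi)^2 ] * h^2,
    using second-order central differences with periodic wrap-around.
    """
    # Central differences (periodic boundary conditions via np.roll).
    phix = (np.roll(phi, -1, axis=0) - np.roll(phi, 1, axis=0)) / (2 * h)
    phiy = (np.roll(phi, -1, axis=1) - np.roll(phi, 1, axis=1)) / (2 * h)
    lap = (np.roll(phi, -1, axis=0) + np.roll(phi, 1, axis=0)
           + np.roll(phi, -1, axis=1) + np.roll(phi, 1, axis=1) - 4 * phi) / h**2

    grad2 = phix**2 + phiy**2
    energy = np.sum(0.25 * (grad2 - 1.0)**2 + 0.5 * eps**2 * lap**2) * h**2
    mass = np.sum(phi) * h**2
    return energy, mass

# Example: a smooth placeholder height profile of the kind used in MBE benchmarks.
N = 128
L = 2 * np.pi
h = L / N
x = np.arange(N) * h
X, Y = np.meshgrid(x, x, indexing="ij")
phi0 = 0.1 * (np.sin(3 * X) * np.sin(2 * Y) + np.sin(5 * X) * np.sin(5 * Y))

E0, m0 = mbe_energy_and_mass(phi0, h)
print(f"Initial discrete energy: {E0:.6f}, total mass: {m0:.3e}")
```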
Figure 1. The isolines of numerical solutions of $ \phi $ in Example 2 using LEDP-I and LEDP-II, respectively. (a-f) are obtained from LEDP-I while (g-l) from LEDP-II. Snapshots are taken at $ t = 0, 0.05, 2.5, 5.5, 8, 30 $, respectively. The time step is set as $ \tau = 1.0e-3 $

Figure 2. Time evolution of the error in mass and global energy with $ N = 129 $ and $ \tau = 1.0e-3 $ in Example 2 using LEDP-I and LEDP-II, respectively

Figure 3. Time evolution of energy and maximal residue of the local energy dissipation law with $ N = 129 $, $ \tau = 1.0e-3 $ and $ \tau $ based on adaptive time stepping algorithm in Example 2 using LEDP-I, LEDP-II, respectively

Figure 4. The isolines of numerical solutions of $ \phi $ (left) and its Laplacian $ \Delta \phi $ (right) in Example 3 using LEDP-I. Snapshots are taken at $ t = 0, 5, 10, 20, 40, 80 $. The time and space step are set as $ \tau = 1.0e-3 $ and $ N = 513 $

Figure 5. The isolines of numerical solutions of $ \phi $ (left) and its Laplacian $ \Delta \phi $ (right) in Example 3 using LEDP-II. Snapshots are taken at $ t = 0, 5, 10, 20, 40, 80 $. The time and space step are set as $ \tau = 1.0e-3 $ and $ N = 513 $

Figure 6. Time evolution of the error in mass, energy and maximal residue with $ N = 513 $ and $ \tau = 1.0e-3 $ in Example 3 using LEDP-I and LEDP-II, respectively

Figure 7. The energy for LEDP-I and LEDP-II via different time steps

Figure 8. The numerical results show the proper power law behavior in the decaying energy as $ O(t^{-\frac{1}{3}}) $ and roughness as $ O(t^{\frac{1}{3}}) $

Table 1. Mesh refinement test for LEDP-I at $ t = 1 $

  N     τ        L∞ error     L² error     L∞ order   L² order   CPU time
  11    0.1      0.1805       0.5671       –          –          6.24e-1
  33    1/30     0.0170       0.0535       2.1495     2.1495     8.71e-1
  99    1/90     0.0019       0.0058       2.0160     2.0160     4.82
  297   1/270    2.0605e-4    6.4733e-4    2.0018     2.0018     5.75e+1

Table 2. Mesh refinement test for LEDP-II at $ t = 1 $

  N     τ        L∞ error     L² error     L∞ order   L² order   CPU time
  11    0.1      1.9180e-4    6.0195e-4    –          –          1.25e-1
  33    1/30     2.1309e-5    6.6866e-5    2.0001     2.0002     2.47e-1
  99    1/90     2.3678e-6    7.4296e-6    1.9999     2.0000     2.01
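The "Order" columns in the tables above can be reproduced from the error columns, since the observed convergence rate between two successive grids is log(e_coarse/e_fine)/log(h_coarse/h_fine) and the refinement ratio here is 3. The sketch below recomputes the L∞ orders for Table 1 from the tabulated errors (agreement is up to rounding of the printed error values); the helper itself is generic.

```python
import math

# (N, Linf_error) rows taken from Table 1 (LEDP-I); N triples on each refinement.
rows = [(11, 0.1805), (33, 0.0170), (99, 0.0019), (297, 2.0605e-4)]

def observed_order(N_coarse, e_coarse, N_fine, e_fine):
    # The grid-spacing ratio equals the inverse of the N ratio for uniform refinement.
    return math.log(e_coarse / e_fine) / math.log(N_fine / N_coarse)

for (Nc, ec), (Nf, ef) in zip(rows, rows[1:]):
    print(f"N = {Nc:>3} -> {Nf:>3}: observed L-infinity order = "
          f"{observed_order(Nc, ec, Nf, ef):.4f}")
```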
Effectiveness and cost-effectiveness of a cardiovascular risk prediction algorithm for people with severe mental illness E. Zomer, D. Osborn, I. Nazareth, R. Blackburn, A. Burton, S. Hardoon, R.I.G. Holt, M. King, L. Marston, S. Morris, R. Omar, I. Petersen, K. Walters, R.M. Hunter Journal: European Psychiatry / Volume 33 / Issue S1 / March 2016 Published online by Cambridge University Press: 23 March 2020, p. S191 Cardiovascular risk prediction tools are important for cardiovascular disease (CVD) prevention, however, which algorithms are appropriate for people with severe mental illness (SMI) is unclear. Objectives/aims To determine the cost-effectiveness using the net monetary benefit (NMB) approach of two bespoke SMI-specific risk algorithms compared to standard risk algorithms for primary CVD prevention in those with SMI, from an NHS perspective. A microsimulation model was populated with 1000 individuals with SMI from The Health Improvement Network Database, aged 30–74 years without CVD. Four cardiovascular risk algorithms were assessed; (1) general population lipid, (2) general population BMI, (3) SMI-specific lipid and (4) SMI-specific BMI, compared against no algorithm. At baseline, each cardiovascular risk algorithm was applied and those high-risk (> 10%) were assumed to be prescribed statin therapy, others received usual care. Individuals entered the model in a 'healthy' free of CVD health state and with each year could retain their current health state, have cardiovascular events (non-fatal/fatal) or die from other causes according to transition probabilities. The SMI-specific BMI and general population lipid algorithms had the highest NMB of the four algorithms resulting in 12 additional QALYs and a cost saving of approximately £37,000 (US$ 58,000) per 1000 patients with SMI over 10 years. The general population lipid and SMI-specific BMI algorithms performed equally well. The ease and acceptability of use of a SMI-specific BMI algorithm (blood tests not required) makes it an attractive algorithm to implement in clinical settings. The authors have not supplied their declaration of competing interest. Signatures of quantum effects on radiation reaction in laser–electron-beam collisions Frontiers in Plasma Physics Conference C. P. Ridgers, T. G. Blackburn, D. Del Sorbo, L. E. Bradley, C. Slade-Lowther, C. D. Baird, S. P. D. Mangles, P. McKenna, M. Marklund, C. D. Murphy, A. G. R.
Thomas Journal: Journal of Plasma Physics / Volume 83 / Issue 5 / October 2017 Published online by Cambridge University Press: 11 September 2017, 715830502 Two signatures of quantum effects on radiation reaction in the collision of a ${\sim}$GeV electron beam with a high intensity ( ${>}3\times 10^{20}~\text{W}~\text{cm}^{-2}$) laser pulse have been considered. We show that the decrease in the average energy of the electron beam may be used to measure the Gaunt factor $g$ for synchrotron emission. We derive an equation for the evolution of the variance in the energy of the electron beam in the quantum regime, i.e. quantum efficiency parameter $\unicode[STIX]{x1D702}\not \ll 1$. We show that the evolution of the variance may be used as a direct measure of the quantum stochasticity of the radiation reaction and determine the parameter regime where this is observable. For example, stochastic emission results in a 25 % increase in the standard deviation of the energy spectrum of a GeV electron beam, 1 fs after it collides with a laser pulse of intensity $10^{21}~\text{W}~\text{cm}^{-2}$. This effect should therefore be measurable using current high-intensity laser systems. Island extinctions: processes, patterns, and potential for ecosystem restoration Humans and Island Environments JAMIE R. WOOD, JOSEP A. ALCOVER, TIM M. BLACKBURN, PERE BOVER, RICHARD P. DUNCAN, JULIAN P. HUME, JULIEN LOUYS, HANNEKE J.M. MEIJER, JUAN C. RANDO, JANET M. WILMSHURST Journal: Environmental Conservation / Volume 44 / Issue 4 / December 2017 Published online by Cambridge University Press: 24 July 2017, pp. 348-358 Extinctions have altered island ecosystems throughout the late Quaternary. Here, we review the main historic drivers of extinctions on islands, patterns in extinction chronologies between islands, and the potential for restoring ecosystems through reintroducing extirpated species. While some extinctions have been caused by climatic and environmental change, most have been caused by anthropogenic impacts. We propose a general model to describe patterns in these anthropogenic island extinctions. Hunting, habitat loss and the introduction of invasive predators accompanied prehistoric settlement and caused declines of endemic island species. Later settlement by European colonists brought further land development, a different suite of predators and new drivers, leading to more extinctions. Extinctions alter ecological networks, causing ripple effects for islands through the loss of ecosystem processes, functions and interactions between species. Reintroduction of extirpated species can help restore ecosystem function and processes, and can be guided by palaeoecology. However, reintroduction projects must also consider the cultural, social and economic needs of humans now inhabiting the islands and ensure resilience against future environmental and climate change. Uptake of Copper by Parrotfeather David L. Sutton, R. D. Blackburn Journal: Weed Science / Volume 19 / Issue 3 / May 1971 Published online by Cambridge University Press: 12 June 2017, pp. 282-285 The copper content of emersed parrotfeather (Myriophyllum brasiliense Camb.) roots ranged from 52 to 6,505 ppmw after root applications of copper sulfate pentahydrate (hereinafter referred to as CSP) at 0.25 to 16.0 ppmw of copper, respectively, under greenhouse conditions. The coefficient of correlation for copper in the treatment solution to the copper content of the roots was 0.9502. 
A slight acropetal movement of copper occurred because of an increase of CSP in solution and an increase in treatment time. Concentrations of CSP at 2.0 ppmw of copper or higher inhibited growth of parrotfeather. Phosphorus levels in the roots of parrotfeather were reduced by root applications of CSP at 1.0 ppmw of copper or higher during a 2-week treatment period. Herbicidal Treatment Effect on Carbohydrate Levels of Alligatorweed L. W. Weldon, R. D. Blackburn Journal: Weed Science / Volume 17 / Issue 1 / January 1969 Published online by Cambridge University Press: 12 June 2017, pp. 66-69 We applied propylene glycol butyl ether esters of 2-(2,4,5-trichlorophenoxy)propionic acid (silvex) and 2,4-dichlorophenoxyacetic acid (2,4-D) to floating alligatorweed (Alternanthera philoxeroides (Mart.) Griseb.) and determined the level of carbohydrates in the underwater stems. The chemicals were applied at 4 and 8 lb/A on five application dates during a growing season at two sites. One month after initial application, the readily acid-hydrolyzable carbohydrates had been depleted by an average of 23.8% in a tidal area and 14.5% in a non-flowering area. Throughout the growing season, levels of carbohydrates were higher in a non-flowing area. The alligatorweed in the tidal area was more susceptible to herbicides. Regrowth from underwater nodes resulted in replenishment of the carbohydrates during the second month following treatment with 2,4-D. Carbohydrate levels remained low for 2 months following treatment with silvex, which resulted in more effective control of alligatorweed. Silvex at 8 lb/A, with retreatment after 2 months, provided the most effective control of floating alligatorweed. Effect of Diquat on Uptake of Copper in Aquatic Plants David L. Sutton, L. W. Weldon, R. D. Blackburn Journal: Weed Science / Volume 18 / Issue 6 / November 1970 Combinations of copper sulfate pentahydrate (hereinafter referred to as CSP) at 1.0 ppmw copper plus 0.1 to 2.0 ppmw 6,7-dihydrodipyrido-(1,2-a:2′,1′-c)pyrazinediium ion (diquat) resulted in higher accumulations of copper in hydrilla (Hydrilla verticillata Casp.), egeria (Egeria densa Planch.), and southern naiad (Najas guadalupensis Spreng. Magnus) when compared to plants which received only CSP. A contact period greater than 24 hr was necessary before the higher amounts of copper were detected in those plants treated with the combinations. Water samples from outdoor, plastic pools 7 days after treatment with CSP at 1.0 ppmw of copper plus 1.0 ppmw of diquat contained 25% less copper than pools treated with CSP. Samples of hydrilla and southern naiad removed 7 days after treatment of the pools with the combination contained 77 and 38% more copper, respectively, than samples from those pools treated with only CSP. Ecology of Submersed Aquatic Weeds in South Florida Canals R. D. Blackburn, P. F. White, L. W. Weldon Journal: Weed Science / Volume 16 / Issue 2 / April 1968 A 3-year ecological study of physical, chemical, and biological factors affecting the growth of submersed aquatic weeds was conducted in four irrigation and drainage canals in south Florida. Growth of aquatic vegetation was influenced by light intensity which decreased in a geometric ratio to water depth. The light penetration into the water was influenced by dissolved coloring matter and turbidity. A high concentration of nitrogen was found in three of the canals. Nitrogen entered the canals as sewage effluent discharged into the water and as fertilizer runoff from surrounding farmland. 
Density of aquatic vegetation harvested at intervals of 6 months from clipped plots varied from zero in a newly dug canal to 22,000 lb green weight/A in an old established canal. During the 3-year study, there was a complete shift of the vegetation from southern naiad (Najas guadalupensis (Spreng.) Magnus) to Florida elodea (Hydrilla verticillata Casp.) in two of the canals, but there was no change in plant species in two canals. No other chemical and physical factors evaluated in the study limited submersed aquatic weed growth in these canals. Response of Aquatic Plants to Combinations of Endothall and Copper David L. Sutton, R. D. Blackburn, W. C. Barlowe The addition of 0.5 and 5.0 ppmw of 7-oxabicyclo [2.2.1] = heptane-2,3-dicarboxylic acid (endothall) to solutions of copper sulfate pentahydrate (CSP) at 1.0 ppmw of copper applied to the roots of emersed parrotfeather (Myriophyllum brasiliense (Camb.) increased the copper content of the roots of these plants. Growth of parrotfeather was inhibited by root applications of 0.5 and 5.0 ppmw of endothall, but CSP did not increase its phytotoxicity. However, a synergistic effect, as determined by dry weight, was calculated after treatment of hydrilla (Hydrilla verticillata Casp.) with a combination of 5.0 ppmw of endothall plus CSP at 1.0 ppmw of copper. An increase in copper uptake and a reduction in phosphorus levels was associated with those plants treated with the combination. The White Amur for Aquatic Weed Control Jane E. Michewicz, D. L. Sutton, R. D. Blackburn The use of herbivorous fish for the biological control of aquatic weeds has great potential. The susceptibility of most herbivorous fish to low temperature is the principal factor limiting their use in the United States. The white amur (Ctenopharyngodon idella Val.) can tolerate low temperature and other water quality extremes. This fish has a voracious appetite for many aquatic plants, and after attaining a length of 30 mm, it is almost exclusively phytophagous. Factors affecting the feeding habits of the white amur include the species of plants and water temperature. The white amur has been introduced for aquatic weed control in various countries. This fish might ameliorate some of the aquatic weed problems in the United States, and also provide a new source of protein. Effect of Copper on Uptake of Diquat-14C by Hydrilla David L. Sutton, W. T. Haller, K. K. Steward, R. D. Blackburn A linear uptake of 14C-labeled 6,7-dihydrodipyrido (1,2-a:2′,1′-c)pyrazinediium ion (diquat-14C) at 1 ppmw by hydrilla (Hydrilla verticillata Casp.) occurred during a 10-day period under controlled conditions. Plant tissue contained higher amounts of radioactivity after treatment with combinations of 0.1 or 0.5 ppmw of diquat-14C plus 1 ppmw of copper than did tissue of plants in solutions of diquat-14C alone. Plants in a 1 ppmw of diquat-14C solution contained 2,123 cpm/mg after 8 days; however, 5,857 cpm/mg were measured in plant tissue taken after the same period from solutions containing diquat-14C plus 2 ppmw copper as copper sulfate pentahydrate (CSP). An increase in CSP from 3 to 10 ppmw did not significantly increase the uptake of diquat-14C over the 2 ppmw. Results with an organic copper complex (copper sulfate triethanolamine) in combination with diquat-14C were the same as those obtained with diquat-14C plus CSP. 14 - The Oligo-Miocene coal floras of southeastern Australia By D. T. Blackburn, Kinhill Engineers, 186 Greenhill Road, Parkside, Adelaide, South Australia 5063, Australia, I. 
R. K. Sluiter, Department of Conservation and Natural Resources, State Government Offices, 253 Eleventh Street, Mildura, Victoria 3500, Australia Edited by Robert S. Hill, University of Adelaide Book: History of the Australian Vegetation Published by: The University of Adelaide Press Published online: 25 July 2017 Print publication: 31 March 2017, pp 328-367 This chapter presents the results of the authors' separate and conjoint studies of the palaeobotany, stratigraphy and palaeoecology of the Latrobe Valley Coal Measures, a thick sequence of coal seams and interseam elastics in the Gippsland region of southeastern Australia (Figure 14.1). The Latrobe Valley Coal Measures are of Middle Eocene-Middle Miocene age and were deposited after the separation of Australia from Antarctica, a time of significant climatic change for the Australian continent and a period during which major global changes in sea levels and climates occurred. The Eocene-Miocene was an important time for the development of the modern Australian flora and the Latrobe Valley Coal Measures provide a window into this critical period. The study of the fossil assemblages of the coals has led to a detailed understanding of the vegetation that formed the coal measures and contributes significantly to our knowledge of the development of the modern Australian flora. Previous workers have directed their efforts to particular aspects of the nature and relationships of the coal swamp vegetation. Early workers (Chapman, I925a,b; Deane, 1925; Cookson, 1946, 1947, 1950, 1953, 1957, 1959; Cookson & Duigan, 1950, 1951; Cookson & Pike, 1953a,b; 1954a,b; Pike, 1953; Patton, 1958) concentrated upon the taxonomic affinities of macroscopic and microscopic plant remains. Duigan (1966) studied selected micro- and macrofossil plant taxa from a palaeogeographical, ecological and evolutionary viewpoint. Baragwanath & Kiss (1964), Partridge (1971) and Stover & Partridge (1973) were more concerned with establishing stratigraphic relationships with the microfossils. The last of these studies, of pollen and marine microfossils, developed a long-accepted biostratigraphy for the basin. Recently the biostratigraphy has been revised using more extensive sampling and better correlation with the geology of the basin (Haq et al., 1987; Holdgate, 1985, 1992; Holdgate & Sluiter, 1991). These studies also demonstrated the significance of eustatic sea level changes in influencing sedimentation within the basin and also that significant marine transgressions can be traced in both the lithofacies and the microfossil record. Management of lateral skull base cancer: United Kingdom National Multidisciplinary Guidelines JJ Homer, T Lesser, D Moffat, N Slevin, R Price, T Blackburn Journal: The Journal of Laryngology & Otology / Volume 130 / Issue S2 / May 2016 Published online by Cambridge University Press: 12 May 2016, pp. S119-S124 Print publication: May 2016 This is the official guideline endorsed by the specialty associations involved in the care of head and neck cancer patients in the UK. It provides recommendations on the work up and management of lateral skull base cancer based on the existing evidence base for this rare condition. • All patients with more than one of: chronic otalgia, bloody otorrhoea, bleeding, mass, facial swelling or palsy should be biopsied. (R) • Magnetic resonance and computed tomography imaging should be performed. (R) • Patients should undergo audiological assessment. (R) • Carotid angiography is recommended in select patients. 
(G) • The modified Pittsburg T-staging system is recommended. (G) • The minimum operation for cancer involving the temporal bone is a lateral temporal bone resection. (R) • Facial nerve rehabilitation should be initiated at primary surgery. (G) • Anterolateral thigh free flap is the workhorse flap for lateral skull base defect reconstruction. (G) • For patients undergoing surgery for squamous cell carcinoma, at least a superficial parotidectomy and selective neck dissection should be carried out. (R) Triadic resonances in precessing rapidly rotating cylinder flows T. Albrecht, H. M. Blackburn, J. M. Lopez, R. Manasseh, P. Meunier Journal: Journal of Fluid Mechanics / Volume 778 / 10 September 2015 Published online by Cambridge University Press: 30 July 2015, R1 Direct numerical simulations of flows in cylinders subjected to both rapid rotation and axial precession are presented and analysed in the context of a stability theory based on the triadic resonance of Kelvin modes. For a case that was chosen to provide a finely tuned resonant instability with a small nutation angle, the simulations are in good agreement with the theory and previous experiments in terms of mode shapes and dynamics, including long-time-scale regularization of the flow and recurrent collapses. Cases not tuned to the most unstable triad, but with the nutation angle still small, are also in quite good agreement with theoretical predictions, showing that the presence of viscosity makes the physics of the triadic-resonance model robust to detuning. Finally, for a case with $45^{\circ }$ nutation angle for which it has been suggested that resonance does not occur, the simulations show that a slowly growing triadic resonance predicted by theory is in fact observed if sufficient evolution time is allowed. By Alessandro Aiuppa, Nick T. Arndt, Jean Besse, Benjamin A. Black, Terrence J. Blackburn, Nicole Bobrowski, Samuel A. Bowring, Seth D. Burgess, Kevin Burke, Ying Cui, Vincent Courtillot, Amy Donovan, Linda T. Elkins-Tanton, Anna Fetisova, Frédéric Fluteau, Kirsten E. Fristad, Lori S. Glaze, Thor H. Hansteen, Morgan T. Jones, Jeffrey T. Kiehl, Nadezhda A. Krivolutskaya, Kirstin Krüger, Lee R. Kump, Steffen Kutterolf, Dimitry V. Kuzmin, Jean-François Lamarque, A. Latyshev, Kimberly V. Lau, Tamsin A. Mather, Katja M. Meyer, Clive Oppenheimer, Vladimir Pavlov, Jonathan L. Payne, Ingrid Ukstins Peate, David Pieri, Sverre Planke, Ulrich Platt, Alexander Polozov, Fred Prata, Gemma Prata, David M. Pyle, Andy Ridgwell, Alan Robock, Ellen K. Schaal, Anja Schmidt, Stephen Self, Christine Shields, Juan Carlos Silva-Tamayo, Alexander V. Sobolev, Stephan V. Sobolev, Henrik Svensen, Trond H. Torsvik, Roman Veselovskiy Edited by Anja Schmidt, University of Cambridge, Kirsten Fristad, Western Washington University, Linda Elkins-Tanton, Arizona State University Book: Volcanism and Global Environmental Change Published online: 05 February 2015 Print publication: 08 January 2015, pp viii-xii The discrete logarithm problem for exponents of bounded height Communication, information Finite fields and commutative rings (number-theoretic aspects) Simon R. 
Blackburn, Sam Scott Journal: LMS Journal of Computation and Mathematics / Volume 17 / Issue A / 2014 Let $G$ be a cyclic group written multiplicatively (and represented in some concrete way). Let $n$ be a positive integer (much smaller than the order of $G$). Let $g,h\in G$. The bounded height discrete logarithm problem is the task of finding positive integers $a$ and $b$ (if they exist) such that $a\leq n$, $b\leq n$ and $g^a=h^b$. (Provided that $b$ is coprime to the order of $g$, we have $h=g^{a/b}$ where $a/b$ is a rational number of height at most $n$. This motivates the terminology.) The paper provides a reduction to the two-dimensional discrete logarithm problem, so the bounded height discrete logarithm problem can be solved using a low-memory heuristic algorithm for the two-dimensional discrete logarithm problem due to Gaudry and Schost. The paper also provides a low-memory heuristic algorithm to solve the bounded height discrete logarithm problem in a generic group directly, without using a reduction to the two-dimensional discrete logarithm problem. This new algorithm is inspired by (but differs from) the Gaudry–Schost algorithm. Both algorithms use $O(n)$ group operations, but the new algorithm is faster and simpler than the Gaudry–Schost algorithm when used to solve the bounded height discrete logarithm problem. Like the Gaudry–Schost algorithm, the new algorithm can easily be carried out in a distributed fashion. The bounded height discrete logarithm problem is relevant to a class of attacks on the privacy of a key establishment protocol recently published by EMVCo for comment. This protocol is intended to protect the communications between a chip-based payment card and a terminal using elliptic curve cryptography. The paper comments on the implications of these attacks for the design of any final version of the EMV protocol. Aging as Accelerated Accumulation of Somatic Variants: Whole-Genome Sequencing of Centenarian and Middle-Aged Monozygotic Twin Pairs Kai Ye, Marian Beekman, Eric-Wubbo Lameijer, Yanju Zhang, Matthijs H. Moed, Erik B. van den Akker, Joris Deelen, Jeanine J. Houwing-Duistermaat, Dennis Kremer, Seyed Yahya Anvar, Jeroen F. J. Laros, David Jones, Keiran Raine, Ben Blackburne, Shobha Potluri, Quan Long, Victor Guryev, Ruud van der Breggen, Rudi G. J. Westendorp, Peter A. C. 't Hoen, Johan den Dunnen, Gert Jan B. van Ommen, Gonneke Willemsen, Steven J. Pitts, David R. Cox, Zemin Ning, Dorret I. Boomsma, P. Eline Slagboom Journal: Twin Research and Human Genetics / Volume 16 / Issue 6 / December 2013 Published online by Cambridge University Press: 04 November 2013, pp. 1026-1032 It has been postulated that aging is the consequence of an accelerated accumulation of somatic DNA mutations and that subsequent errors in the primary structure of proteins ultimately reach levels sufficient to affect organismal functions. The technical limitations of detecting somatic changes and the lack of insight about the minimum level of erroneous proteins to cause an error catastrophe hampered any firm conclusions on these theories. In this study, we sequenced the whole genome of DNA in whole blood of two pairs of monozygotic (MZ) twins, 40 and 100 years old, by two independent next-generation sequencing (NGS) platforms (Illumina and Complete Genomics).
Potentially discordant single-base substitutions supported by both platforms were validated extensively by Sanger, Roche 454, and Ion Torrent sequencing. We demonstrate that the genomes of the two twin pairs are germ-line identical between co-twins, and that the genomes of the 100-year-old MZ twins are discerned by eight confirmed somatic single-base substitutions, five of which are within introns. Putative somatic variation between the 40-year-old twins was not confirmed in the validation phase. We conclude from this systematic effort that by using two independent NGS platforms, somatic single nucleotide substitutions can be detected, and that a century of life did not result in a large number of detectable somatic mutations in blood. The low number of somatic variants observed by using two NGS platforms might provide a framework for detecting disease-related somatic variants in phenotypically discordant MZ twins. The changing antibiotic susceptibility of bloodstream infections in the first month of life: informing antibiotic policies for early- and late-onset neonatal sepsis R. M. BLACKBURN, N. Q. VERLANDER, P. T. HEATH, B. MULLER-PEBODY Journal: Epidemiology & Infection / Volume 142 / Issue 4 / April 2014 This study describes the association between antibiotic resistance of bacteria causing neonatal bloodstream infection (BSI) and neonatal age to inform empirical antibiotic treatment guidelines. Antibiotic resistance data were analysed for 14 078 laboratory reports of bacteraemia in neonates aged 0–28 days, received by the Health Protection Agency's (now Public Health England) voluntary surveillance scheme for England and Wales between January 2005 and December 2010. Linear and restricted cubic splines were used in logistic regression models to estimate the nonlinear relationship between age and resistance; the significance of confounding variables was assessed using likelihood ratio tests. An increase in resistance in bacteria causing BSI in neonates aged <4 days was observed, which was greatest between days 2–3 and identified an age (4–8 days, depending on the antibiotic) at which antibiotic resistance plateaus to almost unchanging levels. Our results indicate important age-associated changes in antibiotic resistance and support current empirical treatment guidelines. Edited by Simon R. Blackburn, Royal Holloway, University of London, Stefanie Gerke, Royal Holloway, University of London, Mark Wildon, Royal Holloway, University of London Book: Surveys in Combinatorics 2013 Print publication: 27 June 2013, pp v-vi Surveys in Combinatorics 2013 Edited by Simon R. Blackburn, Stefanie Gerke, Mark Wildon This volume contains nine survey articles based on the invited lectures given at the 24th British Combinatorial Conference, held at Royal Holloway, University of London in July 2013. This biennial conference is a well-established international event, with speakers from around the world. The volume provides an up-to-date overview of current research in several areas of combinatorics, including graph theory, matroid theory and automatic counting, as well as connections to coding theory and Bent functions. Each article is clearly written and assumes little prior knowledge on the part of the reader. The authors are some of the world's foremost researchers in their fields, and here they summarise existing results and give a unique preview of cutting-edge developments. 
The book provides a valuable survey of the present state of knowledge in combinatorics, and will be useful to researchers and advanced graduate students, primarily in mathematics but also in computer science and statistics. Print publication: 27 June 2013, pp i-iv
Detailed Cutcell Representation

The fractional cell area and volume method, FAVOR™, enables FLOW-3D to efficiently simulate flow through and around user-defined geometry, without needing unstructured, body-fitted grids. It does this by formulating the governing equations in terms of local porosity functions, which correspond to the open volume and area fractions of the grid cells. In cutcells, where geometry intersects the grid, the porosity functions are used to approximate the surface location, surface orientation, and surface area. In certain situations, these approximations can limit the accuracy of viscous boundary layers, especially along solid surfaces that cannot be aligned with mesh planes. Wall shear stress is especially sensitive to the surface treatment, and skin friction distributions generated with cutcell methods can be noisy.

The core issue at hand can be resolved by embedding a more detailed representation of the cutcell geometry in the solution (Kirkpatrick, Armfield, & Kent, 2003) (Berger & Aftosmis, 2012). The precise geometry defining each cutcell is determined from a sequence of cell-plane intersections during the preprocessing step and replaces the standard approximations made in the discrete equations. This fundamentally improves the accuracy of momentum fluxes along solid boundaries, reducing noise and improving solution quality. A new solver option now enables this detailed cutcell representation in FLOW-3D, as an additional layer on top of the standard FAVOR™ area and volume fractions. Importantly, the governing partial differential equations being solved remain the same, and the overall robustness of the solver is unchanged.

In the 2022R1 release, the detailed cutcell representation is restricted to Cartesian mesh blocks and non-moving, non-porous components. It is also not currently compatible with the immersed boundary method (IBM) (Liang, 2018). For the time being, it is recommended to use detailed cutcells when viscous boundary layer effects are important and IBM in advection dominated situations.

Several tests have been performed to benchmark the accuracy of the detailed cutcell representation in FLOW-3D 2022R1.

Laminar flow past a circular cylinder at Re=40

Incompressible flow around a circular cylinder at Reynolds number, $Re=\frac{\rho U_0 d}{\mu}=40$, is a standard test of boundary layer accuracy (Gautier, Biau, & Lamballais, 2013). Here U0 is the free stream velocity, d is the cylinder diameter, ρ is the fluid density and μ is the fluid viscosity. The flow was simulated using three different grids with spacing

$\frac{d}{\Delta_1}=40, \quad \frac{d}{\Delta_2}=20, \quad \frac{d}{\Delta_3}=10$

Figure 1: Laminar flow past a cylinder at Re=40. (a) Grid convergence of Cd; (b) Cp distribution; (c) Cf distribution.
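To make the idea of area and volume fractions concrete, the sketch below estimates the open (fluid) fraction of each cell of a Cartesian grid cut by a circular cylinder, by subsampling each cell. This is only a conceptual illustration of what the porosity functions represent; it is not FLOW-3D's FAVOR™ or cutcell algorithm, and the domain size, resolution, and subsampling count are arbitrary choices.

```python
import numpy as np

def open_volume_fractions(nx, ny, box=(-2.0, 2.0, -2.0, 2.0),
                          cyl_center=(0.0, 0.0), cyl_radius=0.5, nsub=8):
    """Estimate the fluid (open) fraction of each cell of a Cartesian grid
    cut by a circular cylinder, by subsampling each cell nsub x nsub."""
    x0, x1, y0, y1 = box
    dx, dy = (x1 - x0) / nx, (y1 - y0) / ny
    frac = np.zeros((nx, ny))

    # Subsample points located at the centers of nsub x nsub subcells.
    s = (np.arange(nsub) + 0.5) / nsub
    for i in range(nx):
        for j in range(ny):
            xs = x0 + (i + s[:, None]) * dx        # shape (nsub, 1)
            ys = y0 + (j + s[None, :]) * dy        # shape (1, nsub)
            r2 = (xs - cyl_center[0])**2 + (ys - cyl_center[1])**2
            frac[i, j] = np.mean(r2 > cyl_radius**2)  # fraction outside the solid
    return frac

# 40x40 cells over a 4x4 box (dx = 0.1, i.e. d/Delta = 10 for a diameter of 1).
vf = open_volume_fractions(nx=40, ny=40)
ncut = np.sum((vf > 0.0) & (vf < 1.0))
print(f"Number of cutcells (0 < fraction < 1): {ncut}")
print(f"Open fraction of the whole domain: {vf.mean():.4f}")
```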
The steady state drag coefficient was calculated from the results as $C_D=\frac{F_D}{\frac{1}{2}\rho U_0^2 d}$, where $F_D$ is the total drag force (pressure plus shear). Results on the three grids show convergence under grid refinement to the center of the range reported by (Gautier, Biau, & Lamballais, 2013) and agree well with other Cartesian grid solutions, including the IBM results from (Tseng & Ferziger, 2003). The pressure coefficient, $C_p(\theta)=\frac{p(\theta)-p_0}{\frac{1}{2}\rho U_0^2}$, and skin friction coefficient, $C_f(\theta)=\frac{|\tau(\theta)|}{\frac{1}{2}\rho U_0^2}$, were also calculated for each simulation and plotted as a function of angle from the stagnation point, $\theta$. In these results $\tau$ is the wall shear stress, $p$ is the surface pressure, and $p_0$ is the far-field pressure. Agreement with experimental pressure measurements (Grove, Shair, & Petersen, 1964) and shear stress calculations from a body-fitted solver (Tseng & Ferziger, 2003) is very good.

Turbulent flow around a NACA0012 airfoil

Flow around a NACA0012 airfoil was simulated at $Re=\frac{\rho U_0 c}{\mu}=2.88\times 10^6$, where $c$ is the airfoil chord length. The foil is located at the center of a 60c×60c computational domain, with an upstream velocity inlet, downstream pressure outlet, and a 10° angle of attack. Flow near the surface of the foil is discretized using a grid size $\frac{c}{\Delta}=128$. Grid spacing is made progressively coarser away from the foil by employing a total of 5 nested mesh blocks, each with double/half the spacing of the parent/child mesh. The RNG turbulence model was enabled with dynamic length scale calculation. This is the same Re and foil orientation as the fully turbulent experiments of (Gregory & O'Reilly, 1970), where turbulence was tripped at the leading edge (no laminar transition). Surface profiles of the steady state pressure coefficient are compared with both experimental data (Gregory & O'Reilly, 1970) and numerical results from the body-fitted SU2 solver (Economon, Palacios, Copeland, Lukaczyk, & Alonso, 2016) in Figure 2b. Note that the SU2 results are at a somewhat higher Reynolds number of Re=6×10^6. The FLOW-3D results for surface pressure coefficient agree very well with both datasets. Further comparison of the skin friction coefficient on the upper foil surface with the SU2 solver is made in Figure 2c. Agreement is good, and the mismatch in peak shear stress between the two results is likely due to the different Reynolds numbers and also the coarse grid in the FLOW-3D model, which was not refined to capture the thin boundary layer at the airfoil's leading edge.

Figure 2: Turbulent flow around a NACA0012 airfoil at 10° angle of attack. (a) Steady state pressure contours. (b) Pressure coefficient. (c) Skin friction coefficient.

Smooth and rough wall pipe flow

Steady, incompressible flow through a circular pipe with diameter, $d$, and surface roughness, $\varepsilon$, was simulated. In this case, the dimensionless shear stress or friction factor, $f\left(Re,\frac{\varepsilon}{d}\right)=\frac{8\tau}{\rho U_0^2}$, relates the shear stress to Reynolds number, $Re=\frac{\rho U_0 d}{\mu}$, and relative roughness, $\frac{\varepsilon}{d}$. Here $U_0$ is the mean velocity in the pipe. For laminar flow (Re ≤ 2100) the friction factor in smooth pipes is $f(Re)=\frac{64}{Re}$. For turbulent flow through rough pipes, the Swamee-Jain equation (Swamee & Jain, 1976) provides an explicit approximation to the implicit Colebrook equation, $f\left(Re,\frac{\varepsilon}{d}\right)=\frac{0.25}{\left(\log_{10}\left(\frac{\varepsilon/d}{3.7}+\frac{5.74}{Re^{0.9}}\right)\right)^2}$. It is valid for $5000 \le Re \le 10^8$ and $10^{-6} \le \frac{\varepsilon}{d} \le 0.05$.
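For readers post-processing their own results against these two correlations, they can be evaluated in a few lines. This is a minimal sketch in plain Python (not part of FLOW-3D; the laminar/turbulent switch at Re = 2100 is a simplification, since the Swamee-Jain fit is only stated for Re ≥ 5000):

```python
import math

def friction_factor(Re, rel_roughness=0.0):
    """Reference friction factor: 64/Re for laminar flow, Swamee-Jain for turbulent flow."""
    if Re <= 2100.0:
        return 64.0 / Re
    # Swamee-Jain explicit approximation to the Colebrook equation (5000 <= Re <= 1e8).
    return 0.25 / math.log10(rel_roughness / 3.7 + 5.74 / Re**0.9) ** 2

print(friction_factor(1000))            # 0.064 (laminar)
print(friction_factor(160000, 0.001))   # ~0.0215 (turbulent, rough pipe)
```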
The setup of this problem in FLOW-3D is shown in Figure 3. Flow travels through the pipe in the x-direction, driven by the gravitational body force, $g$. Periodic boundaries are used in the flow direction to eliminate pipe entrance effects. The mesh is uniform in the yz plane with $\frac{d}{\Delta}=20$, and a total of 3 cells are used in the periodic x direction (the solution is not sensitive to the number of cells in x). Three laminar Reynolds numbers, $Re_{Laminar}$ = [500, 1000, 2000], and six turbulent Reynolds numbers, $Re_{Turbulent}$ = [10000, 40000, 160000, 640000, 2560000, 10240000], were targeted by applying an appropriate body force. For each turbulent Reynolds number, a separate simulation was set up for five values of relative roughness, $\frac{\varepsilon}{d}$ = [0, 0.0001, 0.001, 0.01, 0.05]. All turbulent flow simulations used the RNG turbulence model with dynamic length scale calculation. The steady state shear stress and mean velocity were used to compute the actual Reynolds number and actual friction factor from each result.

Figure 3: Illustration of the periodic pipe flow simulations. (a) The simulation setup in FLOW-3D. (b) Contours of velocity magnitude at steady state for Re=10M, and the mesh resolution in the pipe cross section, $\frac{d}{\Delta}=20$.

Figure 4: Pipe flow simulation results. (a) Friction factor as a function of Reynolds number for pipes having different roughness, $\frac{\varepsilon}{d}$. The solid lines are the semi-empirical equations 1 and 2. Each symbol corresponds to a single FLOW-3D simulation. (b) Circumferential variation of shear stress, $\tau$, for the case Re=160,000, $\frac{\varepsilon}{d}$=0.001.

Friction factor is plotted as a function of Reynolds number in Figure 4a alongside the analytic/empirical curves (Equations 1 and 2) for each value of relative roughness, $\frac{\varepsilon}{d}$. Agreement with the laminar correlation is excellent and nearly all turbulent results are within the ≈10% uncertainty range associated with Equation 2. Figure 4b shows the circumferential variation of the shear stress for all points on the surface of the pipe for the case Re=160,000, $\frac{\varepsilon}{d}$=0.001 (other conditions are qualitatively similar). The expected result is a constant value of shear stress, τ=0.002687. FLOW-3D matches this well, with very little circumferential variation around the pipe.
Boundary layer development on an inclined flat plate

Development of a two-dimensional boundary layer along a flat plate was simulated in both the laminar and turbulent regimes. The problem is shown schematically in Figure 5a. Flow with velocity magnitude, $U_0$, enters the domain parallel to the wall surface through the left and top boundaries. The fluid is incompressible with constant density, $\rho$, and viscosity, $\mu$. A boundary layer develops starting at the leading edge of the no-slip surface with length, $L$. A free-slip surface of length 0.5L helps to condition the flow upstream. Flow leaves the domain through a pressure outflow boundary. Boundary layer development is dependent on the Reynolds number, $Re_x=\frac{\rho U_0 x}{\mu}$, where $x$ is the distance measured along the no-slip plate from its leading edge as shown. Semi-analytic expressions for the boundary layer thickness, $\delta$, and skin friction, $C_f=\frac{\tau_w}{\frac{1}{2}\rho U_0^2}$, can be obtained from the Blasius solution:

Laminar flow: $\frac{\delta}{x}=\frac{5.0}{Re_x^{1/2}}$, $C_f=\frac{0.664}{Re_x^{1/2}}$

Turbulent flow: $\frac{\delta}{x}=\frac{0.37}{Re_x^{1/5}}$, $C_f=\frac{0.058}{Re_x^{1/5}}$

Expressions for the drag coefficient, $C_D$, are obtained by integrating $C_f(x)$ from 0 to $L$. For laminar flow, $C_D=\frac{F_D}{\frac{1}{2}\rho U_0^2 wL}=\frac{1.328}{Re_L^{1/2}}$. Here, $F_D$ is the viscous drag force acting parallel to the plate with width, $w$, in the y direction (not shown).

The problem is simulated for both laminar and turbulent conditions using two coordinate systems: one aligned with the x-z mesh planes, and one rotated by 15° as shown below. The latter configuration demonstrates the ability of cutcell methods to simulate boundary layers on surfaces that cannot be aligned with the mesh (Berger & Aftosmis, 2012) (Harada, Tamaki, Takahashi, & Imamura, 2017). The maximum Reynolds number occurs at x=L and is $Re_L=10^4$ for the laminar cases and $10^7$ for the turbulent cases. A series of cubic, nested grids were used to resolve the boundary layers. In the laminar case, the grid spacing within the boundary layer is $\Delta=\frac{L}{800}$, which results in 40 points across the boundary layer at x=L, $\frac{\delta(x=L)}{\Delta}=40$. Smaller grid spacing is used in the turbulent cases, where $\Delta=\frac{L}{2048}$ and $\frac{\delta(x=L)}{\Delta}=30$. In the turbulent cases, 100 ≤ y+ ≤ 200 for the majority of the plate. Second order discretization was used in all cases, and the RNG turbulence model was used for the high Re cases. The skin friction coefficient, $C_f(x)$, is plotted as a function of $Re_x$ in Figure 5b-c. Excellent agreement with the laminar Blasius solution (Equation 4) is obtained for both the aligned and 15° configurations. The differences occurring for $Re_x \le 10^2$ are due to the insufficient resolution of the boundary layer along the leading 1% of the plate's length. The turbulent simulations are somewhat more challenging due to the non-linearity of the turbulent wall model used by FLOW-3D. Nonetheless, a good match between the aligned and 15° cases is obtained. Differences between the two cases and the Blasius correlation are within ∼10% for most of the plate.

Figure 5: Flat plate boundary layer development. (b) Laminar, $Re_L=10^4$. (c) Turbulent, $Re_L=10^7$.

The total shear force on the flat plate was extracted from each simulation and used to compute the drag coefficient.
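The reference values used in these comparisons, including the laminar Blasius drag coefficient quoted in Table 1 below, can be reproduced directly from the correlations above. A small sketch (plain Python; illustrative only):

```python
def boundary_layer(Re_x, laminar=True):
    """Boundary layer thickness ratio delta/x and skin friction Cf from the correlations above."""
    if laminar:
        return 5.0 / Re_x**0.5, 0.664 / Re_x**0.5
    return 0.37 / Re_x**0.2, 0.058 / Re_x**0.2

# Laminar drag coefficient over a plate of length L: integrating Cf gives 1.328/Re_L^(1/2).
Re_L = 1.0e4
print(1.328 / Re_L**0.5)                          # 0.01328, the Blasius value in Table 1
print(boundary_layer(1.0e7, laminar=False))       # (~0.0147, ~0.0023) at Re_x = 1e7
```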
These results are given in Table 1 and compared to the analytic results (Equation 4). Overall, good agreement is obtained with the Blasius solution. The mesh orientation does not have a strong impact on the integrated drag coefficient in either laminar or turbulent flow.

Table 1: Drag coefficient for flat plate boundary layer simulations

          Laminar, Re_L=10^4                Turbulent, Re_L=10^7
          Blasius    0°        15°          Blasius     0°         15°
C_D       0.01328    0.01420   0.01375      0.002866    0.002931   0.002872

Vortex shedding in the wake of a beveled plate

Vortex shedding in the wake of a beveled flat plate was simulated, as shown schematically in Figure 6a. These simulations were set up to mimic the experiments of (Chen & Fang, 1996). Flow enters a 2D channel with uniform velocity, $U_0$, and encounters a beveled plate. In these simulations, L=1 m, w=0.2 m, β=60°, and 0° ≤ α ≤ 90°. The Reynolds number, $Re=\frac{\rho U_0 H}{\mu}=2\times 10^4$, was held fixed. Second order discretization was used with the RNG turbulence model in all simulations.

Figure 6: Vortex shedding in the wake of a beveled plate. (a) Problem schematic. (b) Snapshot of TKE contours for the α=50° case. (c) Dominant Strouhal number associated with vortex shedding in the wake.

Each simulation was run for a total of $\bar{t}=\frac{tU_0}{H}=150$ dimensionless time units. For angles of inclination α > 10°, a regular pattern of vortex shedding can be seen in the wake of the plate. This is illustrated qualitatively by a snapshot of TKE for the α=50° case in Figure 6b. The dominant frequency of vortex shedding was estimated by counting the peaks in the time history of velocity magnitude at the same probe points used in the experiment. This was used to compute the associated Strouhal number, $St=\frac{fH}{U_0}$, which is plotted next to the experimental data in Figure 6c. Good agreement with the measured Strouhal number is achieved for all simulations where vortex shedding was predicted. For α = 0° and 10°, the simulated flow behind the streamlined body did not exhibit a regular vortex shedding pattern. In these cases, a Large Eddy Simulation approach may be more capable of predicting the unsteady nature of the flow.
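The peak-counting procedure described above can be sketched as a short post-processing function. The probe signal below is synthetic (hypothetical values chosen only to exercise the function); NumPy is assumed:

```python
import numpy as np

def strouhal_from_probe(t, u, H, U0):
    """Estimate the dominant shedding frequency by counting local maxima of a probe
    time history, and return the Strouhal number St = f*H/U0 (None if no shedding)."""
    t, u = np.asarray(t), np.asarray(u)
    peaks = np.where((u[1:-1] > u[:-2]) & (u[1:-1] > u[2:]))[0] + 1
    if len(peaks) < 2:
        return None
    f = (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])
    return f * H / U0

# Synthetic check: a 12 Hz oscillation with H = 0.02 m and U0 = 1 m/s gives St = 0.24.
t = np.linspace(0.0, 2.0, 4000)
u = 1.0 + 0.1 * np.sin(2 * np.pi * 12.0 * t)
print(strouhal_from_probe(t, u, H=0.02, U0=1.0))
```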
References

Berger, M., & Aftosmis, M. (2012). Progress towards a Cartesian cut-cell method for viscous compressible flow. 50th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, p. 1301.
Economon, T. D., Palacios, F., Copeland, S. R., Lukaczyk, T. W., & Alonso, J. J. (2016). SU2: An open-source suite for multiphysics simulation and design. AIAA Journal, 54, 828–846.
Gautier, R., Biau, D., & Lamballais, E. (2013). A reference solution of the flow over a circular cylinder at Re=40. Computers & Fluids, 75, 103–111.
Gregory, N., & O'Reilly, C. L. (1970). Low-speed aerodynamic characteristics of NACA 0012 aerofoil section, including the effects of upper-surface roughness simulating hoar frost. Tech. rep., NASA.
Grove, A. S., Shair, F. H., & Petersen, E. E. (1964). An experimental investigation of the steady separated flow past a circular cylinder. Journal of Fluid Mechanics, 19, 60–80.
Kirkpatrick, M. P., Armfield, S. W., & Kent, J. H. (2003). A representation of curved boundaries for the solution of the Navier–Stokes equations on a staggered three-dimensional Cartesian grid. Journal of Computational Physics, 184, 1–36.
Liang, Z. (2018). Immersed Boundary Method for FLOW-3D. Technical Note, Flow Science, Inc.
Swamee, P. K., & Jain, A. K. (1976). Explicit equations for pipe-flow problems. Journal of the Hydraulics Division, 102, 657–664.
Tseng, Y.-H., & Ferziger, J. H. (2003). A ghost-cell immersed boundary method for flow in complex geometry. Journal of Computational Physics, 192, 593–623.
Examples of adjunction

Restriction and induction

Restriction and induction are an important case to consider because they are both left and right adjoint to each other.

Definition of the category {$\mathbf{Rep}_G$}

Fix a field {$K$}. Given a group {$G$}, we define the category of representations {$\mathbf{Rep}_G$} as follows. An object is a pair {$(V,\phi_V)$} consisting of a vector space over {$K$} and a group homomorphism {$\phi_V:G\rightarrow GL(V,K)$}. A morphism is an equivariant map, which is to say, a linear map {$\beta:(V,\phi_V)\rightarrow (W,\psi_W)$} such that {$\beta \circ \phi_V(g)=\psi_W(g)\circ \beta$} for all {$g\in G$}, as in the commutative diagram below. {$$\begin{matrix} & V & \overset{\beta}\longrightarrow & W \\ g\rightarrow\phi_V(g) & \downarrow & & \downarrow & \psi_W(g)\leftarrow g\\ & V & \overset{\beta}\longrightarrow & W \end{matrix}$$} Note that the morphisms {$\beta$}, {$\phi_V(g)$}, {$\psi_W(g)$} in this commutative diagram are all linear transformations. Indeed, {$\beta$} is fixed and serves for all {$g\in G$}. I think of it as an interspatial map from {$V$} to {$W$} that expresses the clockwork of {$\phi_V$} in {$GL(V,K)$} in terms of the clockwork {$\psi_W$} in {$GL(W,K)$}. {$\phi_V$} and {$\psi_W$} are said to be conjugate. Note also that the morphism takes us from one representation to another representation, and yet it is an equivariant map, which is a linear transformation that takes us from one vector space to another vector space. A composition of equivariant maps {$\alpha:(U,\theta_U)\rightarrow(V,\phi_V)$} and {$\beta:(V,\phi_V)\rightarrow (W,\psi_W)$} yields an equivariant map {$\beta\circ\alpha:(U,\theta_U)\rightarrow (W,\psi_W)$} such that {$\beta \circ \alpha \circ \theta_U(g)=\psi_W(g) \circ \beta \circ \alpha$} for all {$g\in G$}. {$$\begin{matrix} & U & \overset{\alpha}\longrightarrow & V & \overset{\beta}\longrightarrow & W \\ \theta_U(g) & \downarrow & & \downarrow & \phi_V(g) & \downarrow & \psi_W(g)\\ & U & \overset{\alpha}\longrightarrow & V & \overset{\beta}\longrightarrow & W \end{matrix}$$}

Examples of objects (representations) when {$G=S_3$}

Consider some examples of representations of {$S_3$} having elements {$()$}, {$(12)$}, {$(13)$}, {$(23)$}, {$(123)$}, {$(132)$}.
Here is the multiplication table: {$$ \begin{matrix} \mathbf{\times} & \mathbf{()} & \mathbf{(12)} & \mathbf{(13)} & \mathbf{(23)} & \mathbf{(123)} & \mathbf{(132)} \\ \mathbf{()} & () & (12) & (13) & (23) & (123) & (132) \\ \mathbf{(12)} & (12) & () & (132) & (123) & (23) & (13) \\ \mathbf{(13)} & (13) & (123) & () & (132) & (12) & (23) \\ \mathbf{(23)} & (23) & (132) & (123) & () & (13) & (12) \\ \mathbf{(123)} & (123) & (13) & (23) & (12) & (132) & () \\ \mathbf{(132)} & (132) & (23) & (12) & (13) & () & (123) \\ \end{matrix} $$} A permutation representation: {$\mathbf{()}$} {$\mathbf{(12)}$} {$\mathbf{(13)}$} {$\mathbf{(23)}$} {$\mathbf{(123)}$} {$\mathbf{(132)} $} {$\mathbf{PR}: $} {$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$} {$\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} $} {$\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}$} {$\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$} {$\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix}$} {$\begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$} Here I want to point out a technical detail that I have lost many hours over and am still not completely sure of. It is a problem that comes up because I like to use cycle notation because it is compact. However, it has caused me great confusion. The cycle notation for the permutation composes from left to right, {$(12)(13)=(123)$}, which means that {$1\rightarrow 2$}, {$2\rightarrow 1\rightarrow 3$}, {$1\rightarrow 3$}. The multiplication of permutation matrices also composes by matrix multiplication from left to right but we must use the inverses! and we order the matrices from right to left because they are acting on the column vector on the right. Which is to say, if {$g_1=(12)$}, {$g_2=(13)$}, {$g_1g_2=(123)$}, then we map {$g$} to the inverse permutation matrix so that {$g_2^{-1}g_1^{-1}=(g_1g_2)^{-1}$} as below: {$$\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$$} Instead, it is simpler to think of the permutation as a reordering of a string, for example, {$123\rightarrow 312$} or simply {$[312]$}. Then this composes in the same way as matrix multiplication and so we can read from left to right {$[321]\cdot [213]=[312]$}, or in cycle notation: {$(13)(12)=(132)$}, or {$\rho((13))\rho((12))=\rho((132))$} or {$\rho((132))_{ik}=\sum_j \rho((13))_{ij} \rho((12))_{jk}$}. 
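To make the convention issue above concrete, here is a small sketch (Python with NumPy; not part of the original notes) checking both ways of turning a permutation into a matrix. With the column convention {$P(g)e_i = e_{g(i)}$}, matrices compose in the same order as right-to-left composition of permutations; with the transposed (row) convention the order reverses, which is why inverses appear above.

```python
import numpy as np
from itertools import permutations

def compose(a, b):                     # apply b first, then a (right-to-left composition)
    return tuple(a[b[i]] for i in range(len(b)))

def P(g):                              # column convention: P(g) sends e_i to e_{g(i)}
    M = np.zeros((len(g), len(g)))
    for i, gi in enumerate(g):
        M[gi, i] = 1.0
    return M

S3 = [tuple(p) for p in permutations(range(3))]
for a in S3:
    for b in S3:
        # P is a homomorphism for right-to-left composition...
        assert np.array_equal(P(a) @ P(b), P(compose(a, b)))
        # ...while the transposed (row) convention reverses the order of multiplication.
        assert np.array_equal(P(a).T @ P(b).T, P(compose(b, a)).T)
```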
A left regular representation: {$\mathbf{RR}: \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} $} The permutation representation and the regular representation are reducible. An irreducible representation has no invariant subspace and so its dimension cannot be too large. The number of irreducible complex representations is equal to the number of conjugacy classes. See: J.P.Serre. Linear Representations of Finite Groups. A study of the regular representation shows that it contains every irreducible representation {$W_i$} with multiplicity equal to its degree {$n_i$}. The trivial representation: {$\mathbf{TR}: (1)$}, {$(1)$}, {$(1)$}, {$(1)$}, {$(1)$}, {$(1)$}. The alternating representation: {$\mathbf{AR}: (1)$}, {$(-1)$}, {$(-1)$}, {$(-1)$}, {$(1)$}, {$(1)$}. The standard representation is derived from the permutation representation by considering the complement of the one-dimensional trivial representation on the invariant subspace spanned by {$(1,\dots,1)$}. The standard representation can be written in several noteworthy ways. Here are some from the GroupProps wiki: {$ \mathbf{SR1} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix} \begin{pmatrix} -1 & 1 \\ -1 & 0 \end{pmatrix} $} {$ \mathbf{SR2} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} -1/2 & \sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{pmatrix} \begin{pmatrix} -1/2 & -\sqrt{3}/2 \\ -\sqrt{3}/2 & 1/2 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} -1/2 & -\sqrt{3}/2 \\ \sqrt{3}/2 & -1/2 \end{pmatrix} \begin{pmatrix} -1/2 & \sqrt{3}/2 \\ -\sqrt{3}/2 & -1/2 \end{pmatrix} $} {$ \mathbf{SR3} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & e^{2\pi i/3} \\ e^{-2\pi i/3} & 0 \end{pmatrix} \begin{pmatrix} 0 & e^{-2\pi i/3} \\ e^{2\pi i/3} & 0 \end{pmatrix} \begin{pmatrix} e^{2\pi i/3} & 0 \\ 0 & e^{-2\pi i/3} \end{pmatrix} \begin{pmatrix} e^{-2\pi i/3} & 0 \\ 0 & e^{2\pi i/3} \end{pmatrix} $} Examples of morphisms (equivariant maps) when {$G=S_3$} Given the representations {$\mathbf{SR1}$}, {$\mathbf{SR2}$}, {$\mathbf{SR3}$} from the previous section, consider the equivariant maps {$\alpha_{ij}:\mathbf{SRi}\rightarrow \mathbf{SRj}$}. 
They are given by the matrices below, where {$C$} is an arbitrary scalar constant: {$\alpha_{12}= C\begin{pmatrix} 2 & 0 \\ 1 & \sqrt{3} \end{pmatrix} $} {$\alpha_{13}= C\begin{pmatrix} -u & -u^2 \\ u & 1 \end{pmatrix} $} where {$u=e^{\frac{2\pi i}{3}}$}. {$\alpha_{23}= ... $}

When {$W=V$}, the equivariant map {$\alpha$} is a square matrix, and if it is an invertible matrix, we have that {$\phi_V(g) = \alpha \psi_V(g) \alpha^{-1}$}. This action takes us from representation {$\psi_V$} to representation {$\phi_V$} by changing the basis. Note also that the zero map is an equivariant map.

Calculating a morphism (an equivariant map)

An equivariant map is fixed for all {$g\in G$}. When the representations are finite dimensional, then the equivariant map is a matrix. Here is an example of calculating an equivariant map {$\alpha_{12}:\mathbf{SR1}\rightarrow \mathbf{SR2}$} as a matrix {$a_{ij}$}. {$\begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} -\frac{1}{2} & \frac{\sqrt{3}}{2} \\ \frac{\sqrt{3}}{2} & \frac{1}{2} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}$} {$\begin{pmatrix} (-a_{11}+a_{21})v_1 + (-a_{12}+a_{22})v_2 \\ a_{21}v_1 + a_{22}v_2 \end{pmatrix} = \begin{pmatrix} (-\frac{1}{2}a_{11} + \frac{\sqrt{3}}{2}a_{12})v_1 + (\frac{\sqrt{3}}{2}a_{11} + \frac{1}{2}a_{12})v_2 \\ (-\frac{1}{2}a_{21} + \frac{\sqrt{3}}{2}a_{22})v_1 + (\frac{\sqrt{3}}{2}a_{21} + \frac{1}{2}a_{22})v_2 \end{pmatrix}$} {$a_{21}=-\frac{1}{2}a_{21} + \frac{\sqrt{3}}{2}a_{22} \Rightarrow a_{22}=\sqrt{3}a_{21} $} and {$a_{22}=\frac{\sqrt{3}}{2}a_{21} + \frac{1}{2}a_{22} \Rightarrow a_{22}=\sqrt{3}a_{21} $} {$-a_{11}+a_{21} = -\frac{1}{2}a_{11} + \frac{\sqrt{3}}{2}a_{12} \Rightarrow a_{21} = \frac{1}{2}a_{11} + \frac{\sqrt{3}}{2}a_{12}$} and {$-a_{12}+a_{22} = \frac{\sqrt{3}}{2}a_{11} + \frac{1}{2}a_{12} \Rightarrow a_{22} = \frac{\sqrt{3}}{2}a_{11} + \frac{3}{2}a_{12}$} {$\begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \begin{pmatrix} -\frac{1}{2} & -\frac{\sqrt{3}}{2} \\ -\frac{\sqrt{3}}{2} & \frac{1}{2} \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}$} {$a_{21} = \frac{1}{2}a_{11} + \frac{\sqrt{3}}{2}a_{12}$} as above {$a_{22} = \frac{\sqrt{3}}{2}a_{11} - \frac{1}{2}a_{12}$} when combined with {$a_{22} = \frac{\sqrt{3}}{2}a_{11} + \frac{3}{2}a_{12}$} from above implies {$a_{12}=0$} {$a_{11} = \frac{1}{2}a_{21} + \frac{\sqrt{3}}{2}a_{22}$} with {$a_{22}=\sqrt{3}a_{21}$} yields {$a_{11}=2a_{21}$} {$a_{12} = \frac{\sqrt{3}}{2}a_{21} - \frac{1}{2}a_{22}$} with {$a_{22}=\sqrt{3}a_{21}$} yields {$a_{12}=0$} Thus we have that this equivariant map is: {$a_{21}\begin{pmatrix} 2 & 0 \\ 1 & \sqrt{3} \end{pmatrix}$}, where {$a_{21}$} is the free constant.
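The same calculation can be checked numerically. The sketch below (NumPy assumed; not part of the original notes) imposes the two intertwining equations exactly as set up above, for the generators (12) and (13), and extracts the one-dimensional solution space; it recovers a scalar multiple of the matrix just derived.

```python
import numpy as np

sqrt3 = np.sqrt(3.0)
SR1 = {"(12)": np.array([[-1.0, 1.0], [0.0, 1.0]]),
       "(13)": np.array([[0.0, -1.0], [-1.0, 0.0]])}
SR2 = {"(12)": np.array([[-0.5, sqrt3 / 2], [sqrt3 / 2, 0.5]]),
       "(13)": np.array([[-0.5, -sqrt3 / 2], [-sqrt3 / 2, 0.5]])}

# Stack the linear conditions SR1(g) A - A SR2(g) = 0 for the generators (12) and (13),
# viewed as a linear system in the four entries of A.
blocks = []
for g in SR1:
    cols = []
    for k in range(4):
        E = np.zeros(4); E[k] = 1.0
        A = E.reshape(2, 2)
        cols.append((SR1[g] @ A - A @ SR2[g]).reshape(4))
    blocks.append(np.column_stack(cols))
L = np.vstack(blocks)                      # 8 equations in 4 unknowns

# The null space is one-dimensional; normalize so the (1,1) entry is 1.
_, s, Vt = np.linalg.svd(L)
A = Vt[-1].reshape(2, 2)
print(A / A[0, 0])                         # [[1, 0], [0.5, 0.866...]] = (1/2) [[2, 0], [1, sqrt(3)]]
```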
Illustrating composition of morphisms when {$G=S_3$}

Consider {$T=\mathbb{C}^6$}, {$U=\mathbb{C}^3$}, {$V=\mathbb{C}^2$}, {$W=\mathbb{C}$} with elements {$(t_1,t_2,t_3,t_4,t_5,t_6)\in T$}, {$(u_1,u_2,u_3)\in U$}, {$(v_1,v_2)\in V$}, {$(w_1)\in W$}.

The restriction functor {$\mathbf{Res}^G_H$}. Definition and example.

Fix field {$K$} as above. Let {$H$} be a subgroup of {$G$} with index {$n=[G:H]$}. The restriction functor {$\mathbf{Res}^G_H: \mathbf{Rep}_G\rightarrow \mathbf{Rep}_H$} sends the pair {$V$} and {$\phi_V:G\rightarrow GL(V,K)$} to the pair {$V$} and {$\phi_V|_H:H\rightarrow GL(V,K)$} where {$\phi_V|_H(h)=\phi_V(h)$} for all {$h\in H$}. Which is to say, the restriction functor simply restricts the representation {$\phi_V$} to be defined on {$H\subset G$}. Similarly, given a morphism (an equivariant map) {$\beta:(V,\phi_V)\rightarrow (W,\psi_W)$} with {$\beta \circ \phi_V(g)=\psi_W(g)\circ \beta$} for all {$g\in G$}, the functor {$\mathbf{Res}^G_H$} simply maps {$\beta$} to {$\beta$} and the diagram commutes, as before, for all {$h\in H\subset G$}. See: Restricted representation.

Given the representation {$\mathbf{SR1}$} of {$G=S_3$}: {$ () \rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$}, {$ (12) \rightarrow \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}$}, {$ (13) \rightarrow \begin{pmatrix} 0 & -1 \\ -1 & 0 \end{pmatrix}$}, {$ (23) \rightarrow \begin{pmatrix} 1 & 0 \\ 1 & -1 \end{pmatrix}$}, {$ (123) \rightarrow \begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix}$}, {$ (132) \rightarrow \begin{pmatrix} -1 & 1 \\ -1 & 0 \end{pmatrix} $}

For the subgroup {$H=\{(),(12)\}$} there is the restricted representation {$\mathbf{Res}^G_H(\mathbf{SR1})$}: {$ () \rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$}, {$ (12) \rightarrow \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} $}

Similarly, for {$H$} we also have the restricted representation {$\mathbf{Res}^G_H(\mathbf{SR2})$}: {$ () \rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$}, {$ (12) \rightarrow \begin{pmatrix} -1/2 & \sqrt{3}/2 \\ \sqrt{3}/2 & 1/2 \end{pmatrix} $}

And the functor {$\mathbf{Res}^G_H$} takes us from the equivariant map {$\alpha_{12}: \mathbf{SR1} \rightarrow \mathbf{SR2}$} to the analogous equivariant map {$\mathbf{Res}^G_H(\alpha_{12}): \mathbf{Res}^G_H(\mathbf{SR1}) \rightarrow \mathbf{Res}^G_H(\mathbf{SR2})$}. {$\alpha_{12}= \mathbf{Res}^G_H(\alpha_{12}) = C\begin{pmatrix} 2 & 0 \\ 1 & \sqrt{3} \end{pmatrix} $} ANSWER: Is the restriction functor invertible?

Definition of the induced representation {$\textrm{Ind}^G_H$}.

The induction functor is defined in terms of the left cosets of {$H$} in {$G$}. The left cosets of {$H$} in {$G$} are the sets of the form {$gH$} where {$g\in G$}. An example is the twelve-hour clock {$\mathbb{Z}_{12}$} and its subgroup {$\{0,6\}$} which has left cosets {$\{0,6\},\{1,7\},\{2,8\},\{3,9\},\{4,10\},\{5,11\}$}. Let us consider some basic facts about the left cosets and establish some notation: Fix {$g\in G$}. Left multiplication by {$g$} defines a map from {$H$} to {$gH$} which is surjective. The map is also injective, for if {$h_1,h_2\in H$} and {$gh_1=gh_2$}, then {$g^{-1}gh_1=g^{-1}gh_2$} and so {$h_1=h_2$}. Thus the map is bijective and the domain and the codomain have the same cardinality: {$|H|=|gH|$}. Given two cosets {$g_1H$} and {$g_2H$}, consider whether {$g_1^{-1}g_2$} is in {$H$}. If it is not, then {$g_1H \cap g_2H = \emptyset $}, for otherwise there exist {$h_1, h_2 \in H$} such that {$g_1h_1 = g_2h_2$} and so {$g_1^{-1}g_2 = h_1h_2^{-1} \in H$}, with a contradiction. And if {$g_1^{-1}g_2 = h$} is in {$H$}, then {$g_2 = g_1h$} and so {$g_2H=g_1hH=g_1H$}, which is to say, {$g_1H=g_2H$} is the same left coset. Consequently, the left cosets partition {$G$} into a set of equivalence classes. We can select one element {$g_i$} from each coset. Let {$I$} be the index set for these elements and the left cosets they represent.
Then the set {$\{g_i | i\in I\}$} is called a set of left coset representatives. And {$G$} is the disjoint union {$\bigsqcup_{i\in I}g_iH$} of the left cosets. Every element {$g \in G$} can be written uniquely as {$g=g_ih_g$} for some {$i\in I$} and {$h_g\in H$}. And, of course, every such expression is an element of {$G$}. We can choose a different set of representatives but the partition remains the same. Note that multiplication on the left by {$g\in G$} sends left cosets to left cosets. For if {$h_1\in H$} and {$gg_i=g_jh_1$} then likewise {$gg_ih_2=g_jh_1h_2$} for all {$h_2\in H$}. Thus {$gg_iH=g_jH$}. Furthermore, if {$gg_iH=g_jH$} and {$gg_kH=g_jH$} then {$gg_iH=gg_kH$}, {$g^{-1}g_iH=g^{-1}g_kH$}, {$g_iH=g_kH$}. Finally, every element of {$G$} is an element of some coset of {$H$} regardless of the cardinalities. Thus multiplication on the left by {$g\in G$} acts as a permutation on the left cosets of {$H$} in {$G$}. There exists a permutation {$\sigma\in S_I$} such that {$gg_iH = g_{\sigma(i)}H$} for all {$i\in I$}. We can choose a different set of representatives but the action of {$g$} on the cosets remains the same. In that sense, the labels may change but the permutation remains the same. If {$G$} is a finite group, then {$|I|$} is finite and we call it the index of {$H$} in {$G$} and denote it {$[G:H]$}. In what follows, we write {$[G:H]=n$}. We have {$|G|=[G:H]|H|$}, which is Lagrange's theorem.

The induction functor is based on the action of {$g\in G$} upon any element {$g_kh_l\in G$} thus decomposed. The crucial equation is: {$$g(g_k(h_l))=g_{\sigma(k)}(g_{\sigma(k)}^{-1}gg_kh_l)$$} This equation expresses the action by {$g$} which takes coset {$g_kH$} to coset {$g_{\sigma(k)}H$} and also acts on {$H$} by taking element {$h_l$} to element {$g_{\sigma(k)}^{-1}gg_kh_l$}. We know that {$h=g_{\sigma(k)}^{-1}gg_kh_l$} is in {$H$} by the decomposition of the element {$gg_kh_l=g_{\sigma(k)}h$}. We also know that {$g_{\sigma(k)}^{-1}gg_k = hh_l^{-1}$} is in {$H$}. The action of taking {$h_l$} to {$g_{\sigma(k)}^{-1}gg_kh_l$} is the action of left multiplying {$h_l$} by {$g_{\sigma(k)}^{-1}gg_k$}. Note that the action on the cosets changes the indices (the action manifests knowledge "how") whereas the action on the subgroup adds a modifier (the action manifests knowledge "what").

Given a representation {$\theta$} of {$H$} on {$V$}, we define the vector space {$W=\bigoplus_{k=1}^n V_k $} where each {$V_k$} is an isomorphic copy of {$V$}. The elements of {$W$} are sums {$\sum_{i=1}^nv_i$} where {$v_i\in V_i$}. When the vector space {$W$} is finite dimensional, {$\dim W=n\dim V$}. The induced representation {$\textrm{Ind}_H^G \theta(g)$} acts on {$W$} with an {$n\dim V \times n\dim V$} matrix which for each {$k\in I$} places the {$\dim V\times \dim V$} matrix for {$\theta(g_{\sigma(k)}^{-1}gg_k)$} as the block {$(\sigma(k), k)$} within the {$n\times n$} permutation matrix for {$\sigma$}. We can define the induced representation most generally as in the Wikipedia article. Given {$g\in G$}, we can uniquely decompose {$gg_k$} as {$gg_k=g_{\sigma(k)}h_k$} where {$h_k\in H$}. Consequently, we may define the induced representation {$\textrm{Ind}_H^G \theta(g)(\sum_{k=1}^nv_k)=\sum_{k=1}^n\theta(h_k)v_{\sigma(k)}$} for all {$v_k\in V_k$}. Note that {$h_k=g_{\sigma(k)}^{-1}gg_k$}.
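The decomposition {$gg_k=g_{\sigma(k)}h_k$} is easy to compute explicitly. Here is a minimal sketch (plain Python, not part of the original notes) that does so for {$G=S_3$} and {$H=\{(),(12)\}$}, with products of permutations read right to left:

```python
from itertools import permutations

def compose(a, b):                     # apply b first, then a (right-to-left)
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

S3 = [tuple(p) for p in permutations(range(3))]
e, t12, t13, t23 = (0, 1, 2), (1, 0, 2), (2, 1, 0), (0, 2, 1)
H = [e, t12]
reps = [e, t13, t23]                   # coset representatives g_1, g_2, g_3

def coset_index(x):
    """Index k with x in g_k H."""
    for k, gk in enumerate(reps):
        if compose(inverse(gk), x) in H:
            return k
    raise ValueError("element not found in any coset")

# For every g and every representative g_k, decompose g*g_k = g_{sigma(k)} * h_k with h_k in H.
for g in S3:
    sigma, hs = [], []
    for gk in reps:
        x = compose(g, gk)
        k2 = coset_index(x)
        sigma.append(k2)
        hs.append(compose(inverse(reps[k2]), x))   # h_k = g_{sigma(k)}^{-1} g g_k
    assert all(h in H for h in hs)
    assert sorted(sigma) == [0, 1, 2]              # sigma permutes the cosets
```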
If the vector spaces have finite dimension, then we can define the induced representation in terms of matrices, as in the video by Mahender Singh. Define {$$\dot{\theta_x}=\begin{cases} \theta_x & \text{ if } x\in H \\ 0 & \text{ if } x\notin H\end{cases}$$} The induced representation is given by {$\textrm{Ind}_H^G \theta (g)=(\dot{\theta}_{g_i^{-1}gg_j})$}, which is to say, the {$(i,j)$} block is {$\dot{\theta}_{g_i^{-1}gg_j}$}. Indeed, the unique nonzero block in the {$j$}th column is {$\theta_{g_{\sigma(j)}^{-1}gg_j}$}, and it sits in row {$\sigma(j)$}. It acts by matrix multiplication whereby it reads the contents of the vector space {$V_j$} in {$W$}, multiplies these contents by the matrix {$\theta_{g_{\sigma(j)}^{-1}gg_j}$}, and writes these new contents within the vector space {$V_{\sigma(j)}$} in {$W$}. Recall that {$g_{\sigma(j)}^{-1}gg_j\in H$} and {$\theta$} is a representation of {$H$}. We can think of {$g\in G$} as conglomerating the actions {$t_j=g_{\sigma(j)}^{-1}gg_j$} of {$H$} for all {$j\in I$}. That action {$t_j$} of {$H$} can be thought of as a sequence of three actions of {$G$}. The first action {$g_j$} takes us from {$H$} to {$g_jH$}. The second action {$g$} takes us from {$g_jH$} to {$g_{\sigma(j)}H$}. The third action {$g^{-1}_{\sigma(j)}$} takes us from {$g_{\sigma(j)}H$} back to {$H$}. This is taking place for each {$j\in I$}. A challenge to consider is to determine, given elements {$t_j\in H$} for one or more {$j\in I$}, how that constrains {$g$}.

Another way to define the induced representation is to use the tensor product. Given the representation {$\theta$} of subgroup {$H$} of {$G$} on the vector space {$V$} over the field {$K$}, we can think of {$\theta$} as a module {$V$} over the group ring {$K[H]$}. Define the induced representation {$$\textrm{Ind}^G_H\theta = K[G]\bigotimes_{K[H]}V$$} See: Tensor product of modules. Extension of scalars. and Change of rings. Extension of scalars.

Example of an induced representation.

Let {$G=S_3$}. Let {$\theta_U$} be the representation of the subgroup {$H=\{(),(12)\}$} given as follows: {$ () \rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$}, {$ (12) \rightarrow \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}$}. Here we can verify that {$\begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$} as required. Then the left cosets of {$H$} are {$\{(),(12)\}$}, {$\{(13),(123)=(13)(12)\}$}, {$\{(23),(132)=(23)(12)\}$} (see S2 in S3: Cosets). Note that cycles compose from right to left (see Symmetric group: Multiplication). The action of {$g\in G$} upon these cosets has the following multiplication table: {$$ \begin{matrix} \times & \mathbf{H} & \mathbf{(13)H} & \mathbf{(23)H} \\ \mathbf{()} & H & (13)H & (23)H \\ \mathbf{(12)} & H & (23)H & (13)H \\ \mathbf{(13)} & (13)H & H & (23)H \\ \mathbf{(123)=(13)(12)} & (13)H & (23)H & H \\ \mathbf{(23)} & (23)H & (13)H & H \\ \mathbf{(132)=(23)(12)} & (23)H & H & (13)H \\ \end{matrix} $$} This expresses the permutation {$\sigma$} which appears in the definition.
We can furthermore express the map {$\tau$} by indicating the action as to how it affects the element in {$H$}: {$$ \begin{matrix} \times & \mathbf{()} & \mathbf{(12)} & \mathbf{(13)} & \mathbf{(13)(12)} & \mathbf{(23)} & \mathbf{(23)(12)} \\ \mathbf{()} & () & (12) & (13) & (13)(12) & (23) & (23)(12) \\ \mathbf{(12)} & (12) & () & (23)(12) & (23) & (13)(12) & (13) \\ \mathbf{(13)} & (13) & (13)(12) & () & (12) & (23)(12) & (23) \\ \mathbf{(123)=(13)(12)} & (13)(12) & (13) & (23) & (23)(12) & (12) & () \\ \mathbf{(23)} & (23) & (23)(12) & (13)(12) & (13) & () & (12) \\ \mathbf{(132)=(23)(12)} & (23)(12) & (23) & (12) & () & (13) & (13)(12) \\ \end{matrix} $$} In constructing the induced representation, we focus on how an element acts on coset representatives: {$$ \begin{matrix} \times & \mathbf{()} & \mathbf{(13)} & \mathbf{(23)} \\ \mathbf{()} & () & (13) & (23) \\ \mathbf{(12)} & (12) & (23)(12) & (13)(12)\\ \mathbf{(13)} & (13) & () & (23)(12) \\ \mathbf{(123)=(13)(12)} & (13)(12) & (23) & (12) \\ \mathbf{(23)} & (23) & (13)(12) & () \\ \mathbf{(132)=(23)(12)} & (23)(12) & (12) & (13) \\ \end{matrix} $$} We can express these actions with permutation matrices of blocks, as follows, where the blocks of zeroes have been omitted. We have {$\textrm{Ind}_H^G \theta(g)= \begin{pmatrix} \dot{\theta}_{()g()} & \dot{\theta}_{()g(13)} & \dot{\theta}_{()g(23)} \\ \dot{\theta}_{(13)g()} & \dot{\theta}_{(13)g(13)} & \dot{\theta}_{(13)g(23)} \\ \dot{\theta}_{(23)g()} & \dot{\theta}_{(23)g(13)} & \dot{\theta}_{(23)g(23)} \end{pmatrix} $} We may calculate as follows: {$ () \rightarrow \begin{pmatrix} \dot{\theta}_{()} & 0 & 0 \\ 0 & \dot{\theta}_{()} & 0 \\ 0 & 0 & \dot{\theta}_{()} \end{pmatrix} = \begin{pmatrix} 1 & 0 & & & & \\ 0 & 1 & & & & \\ & & 1 & 0 & & \\ & & 0 & 1 & & \\ & & & & 1 & 0 \\ & & & & 0 & 1 \end{pmatrix}$}, {$ (12) \rightarrow \begin{pmatrix} \dot{\theta}_{(12)} & 0 & 0 \\ 0 & 0 & \dot{\theta}_{(12)} \\ 0 & \dot{\theta}_{(12)} & 0 \end{pmatrix} = \begin{pmatrix} -1 & 1 & & & & \\ 0 & 1 & & & & \\ & & & & -1 & 1 \\ & & & & 0 & 1 \\ & & -1 & 1 & & \\ & & 0 & 1 & & \end{pmatrix}$}, {$ (13) \rightarrow \begin{pmatrix} 0 & \dot{\theta}_{()} & 0 \\ \dot{\theta}_{()} & 0 & 0 \\ 0 & 0 & \dot{\theta}_{(12)} \end{pmatrix} = \begin{pmatrix} & & 1 & 0 & & \\ & & 0 & 1 & & \\ 1 & 0 & & & & \\ 0 & 1 & & & & \\ & & & & -1 & 1 \\ & & & & 0 & 1 \end{pmatrix}$}, {$ (123) = (13)(12) \rightarrow \begin{pmatrix} 0 & 0 & \dot{\theta}_{(12)} \\ \dot{\theta}_{(12)} & 0 & 0 \\ 0 & \dot{\theta}_{()} & 0 \end{pmatrix} = \begin{pmatrix} & & & & -1 & 1 \\ & & & & 0 & 1 \\ -1 & 1 & & & & \\ 0 & 1 & & & & \\ & & 1 & 0 & & \\ & & 0 & 1 & & \end{pmatrix}$} {$ (23) \rightarrow \begin{pmatrix} 0 & 0 & \dot{\theta}_{()} \\ 0 & \dot{\theta}_{(12)} & 0 \\ \dot{\theta}_{()} & 0 & 0 \end{pmatrix} = \begin{pmatrix} & & & & 1 & 0 \\ & & & & 0 & 1 \\ & & -1 & 1 & & \\ & & 0 & 1 & & \\ 1 & 0 & & & & \\ 0 & 1 & & & & \end{pmatrix}$}, {$ (132) = (23)(12) \rightarrow \begin{pmatrix} 0 & \dot{\theta}_{(12)} & 0 \\ 0 & 0 & \dot{\theta}_{()} \\ \dot{\theta}_{(12)} & 0 & 0 \end{pmatrix} = \begin{pmatrix} & & -1 & 1 & & \\ & & 0 & 1 & & \\ & & & & 1 & 0 \\ & & & & 0 & 1 \\ -1 & 1 & & & & \\ 0 & 1 & & & & \end{pmatrix}$} And we can double check that matrix multiplication works as required so that {$\textrm{Ind}_H^G \theta((13)) \times \textrm{Ind}_H^G \theta((12)) = \textrm{Ind}_H^G \theta((13)(12))$}. 
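This double check can be automated. The sketch below (NumPy assumed; not part of the original notes) builds {$\textrm{Ind}_H^G\theta(g)$} from the block formula {$\dot{\theta}_{g_i^{-1}gg_j}$}, with the same coset representatives (), (13), (23) and with products read right to left, and confirms both the product above and the homomorphism property over all of {$S_3$}:

```python
import numpy as np
from itertools import permutations

def compose(a, b):                     # apply b first, then a (right-to-left)
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(p):
    q = [0] * len(p)
    for i, v in enumerate(p):
        q[v] = i
    return tuple(q)

e, t12, t13, t23 = (0, 1, 2), (1, 0, 2), (2, 1, 0), (0, 2, 1)
c123 = compose(t13, t12)               # (123) = (13)(12), read right to left

reps = [e, t13, t23]                   # coset representatives g_1, g_2, g_3
theta = {e: np.eye(2), t12: np.array([[-1.0, 1.0], [0.0, 1.0]])}

def induced(g):
    """6x6 block matrix whose (i,j) block is theta(g_i^{-1} g g_j) when that lies in H, else 0."""
    M = np.zeros((6, 6))
    for i, gi in enumerate(reps):
        for j, gj in enumerate(reps):
            x = compose(inverse(gi), compose(g, gj))
            if x in theta:
                M[2*i:2*i+2, 2*j:2*j+2] = theta[x]
    return M

# The double check: Ind((13)) Ind((12)) = Ind((123)).
assert np.allclose(induced(t13) @ induced(t12), induced(c123))

# Ind is a homomorphism on all of S3.
S3 = [tuple(p) for p in permutations(range(3))]
assert all(np.allclose(induced(a) @ induced(b), induced(compose(a, b))) for a in S3 for b in S3)
```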
Note that in the symmetric group we need to read cycle products such as {$(13)(12)=(123)$} as composing from right to left because they act on the right. It took me hours to appreciate that! Especially because each cycle is read from left to right. We can compare with the following two row notation: {$\begin{pmatrix} 1 & 2 & 3 \\ \downarrow & \downarrow & \downarrow \\ 3 & 2 & 1 \end{pmatrix} \Leftarrow \begin{pmatrix} 1 & 2 & 3 \\ \downarrow & \downarrow & \downarrow \\ 2 & 1 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 3 \\ \downarrow & \downarrow & \downarrow \\ 2 & 3 & 1 \end{pmatrix}$} Next, compare with the permutation matrix group. Consider the permutation matrices for {$(13)(12)=(123)$}: {$\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$} The matrix for {$(123)$} acts on the column matrix by fixing the labels and shifting the slots. Thus the the labels are moved: {$x_1$} from the first slot to the second slot, {$x_2$} from the second to the third, and {$x_3$} from the third to the first. {$\begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} x_3 \\ x_1 \\ x_2 \end{pmatrix}$} The matrix for {$(123)$} acts on the row matrix by fixing the slots and relabeling the labels. {$\begin{pmatrix} x_1 & x_2 & x_3 \end{pmatrix} \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} = \begin{pmatrix} x_2 & x_3 & x_1 \end{pmatrix}$} In multiplying matrices, we are acting on a vector on the right hand side: {$\mathbf{Ind}^G_H(\theta_U)(\mathbf{(13)})\cdot \mathbf{Ind}^G_H(\theta_U)(\mathbf{(12)})\cdot \mathbf{v} = \mathbf{Ind}^G_H(\theta_U)(\mathbf{(123)})\cdot \mathbf{v} $}. Thus it is the action on the column that is relevant. The formulation in terms of the tensor product helps to make this clear. Similarly, let {$\phi_U$} bet the representation of the subgroup {$H=\{(),(12)\}$} given as follows: {$ () \rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$}, {$ (12) \rightarrow \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} $} Then the induced representation {$\mathbf{Ind}^G_H(\phi_U)$} is given by {$ (12) \rightarrow \begin{pmatrix} 0 & 1 & & & & \\ 1 & 0 & & & & \\ & & & & 0 & 1 \\ & & & & 1 & 0 \\ & & 0 & 1 & & \\ & & 1 & 0 & & \end{pmatrix}$}, {$ (13) \rightarrow \begin{pmatrix} & & 1 & 0 & & \\ & & 0 & 1 & & \\ 1 & 0 & & & & \\ 0 & 1 & & & & \\ & & & & 0 & 1 \\ & & & & 1 & 0 \end{pmatrix} $} and so on. Note that the restriction functor keeps the vector space the same whereas the induction functor replaces the vector space with the direct sum of {$[G:H]$} copies of it. Definition of the induction functor {$\mathbf{Ind}^G_H$}. As before, {$H$} is a subgroup of {$G$} with index {$n=[G:H]$}. We select a set of coset representatives {$\{g_i | 1\leq i \leq n\}$}. Given a vector space {$V$}, we define {$\bar{V}=\bigoplus_{i=1}^{n}V_i$} where {$V_i\cong V$}. The induction functor {$\mathbf{Ind}^G_H: \mathbf{Rep}_H\rightarrow \mathbf{Rep}_G$} sends the representation {$\theta_V:H\rightarrow GL(V,K)$} of {$H$} on {$V$} to the induced representation {$\textrm{Ind}^G_H(\theta_V):G\rightarrow GL(\bar{V},K)$} of {$G$} on {$\bar{V}$}. The induction functor also sends the morphism (equivariant map) {$\alpha:\psi_U\rightarrow\theta_V$} to the morphism (equivariant map) {$\textrm{Ind}^G_H(\alpha):\textrm{Ind}^G_H(\psi_U)\rightarrow \textrm{Ind}^G_H(\theta_V)$}. 
We define the latter morphism to be the linear map from {$\bar{U}$} to {$\bar{V}$} given by {$n\times n$} blocks where the blocks on the diagonal are given by the matrix for {$\alpha$} and the other blocks are all zero. Let us verify that {$\textrm{Ind}^G_H(\alpha)$} is indeed an equivariant map as required. We are given the commutative diagram for {$\alpha$} and {$h\in H$}: {$$\begin{matrix} & U & \overset{\alpha}\longrightarrow & V \\ h\rightarrow\psi_U(h) & \downarrow & & \downarrow & \theta_V(h)\leftarrow h\\ & U & \overset{\alpha}\longrightarrow & V \end{matrix}$$} For any {$g\in G$}, the linear representation {$\textrm{Ind}^G_H(\psi_U)(g)$} is a matrix of {$n\times n$} blocks. Indeed, it is a permutation matrix of the blocks. The {$(i,j)$} block is zero unless {$g^{-1}_igg_j\in H$}, in which case we may write {$h_{ij}=g^{-1}_igg_j$}. The linear representation {$\textrm{Ind}^G_H(\theta_V)(g)$} is the same permutation of blocks. The only difference is that in the first case the nonzero blocks are {$\psi_{h_{ij}}$} and in the second case they are {$\theta_{h_{ij}}$}. We have that {$\alpha\circ\psi_{h_{ij}}=\theta_{h_{ij}}\circ\alpha$}. Comparing block by block, it follows that {$\textrm{Ind}^G_H(\alpha)\circ\textrm{Ind}^G_H(\psi_U)(g)=\textrm{Ind}^G_H(\theta_V)(g)\circ\textrm{Ind}^G_H(\alpha)$}, which gives the commutative diagram for {$\textrm{Ind}^G_H(\alpha)$} and {$g\in G$}: {$$\begin{matrix} & \bar{U} & \overset{\textrm{Ind}^G_H(\alpha)}\longrightarrow & \bar{V} \\ g\rightarrow\textrm{Ind}^G_H(\psi_U)(g) & \downarrow & & \downarrow & \textrm{Ind}^G_H(\theta_V)(g)\leftarrow g\\ & \bar{U} & \overset{\textrm{Ind}^G_H(\alpha)}\longrightarrow & \bar{V} \end{matrix}$$} It sends identity equivariant maps {$\textrm{id}_V$} in {$\mathbf{Rep}_H$} to identity equivariant maps {$\textrm{id}_{\bar{V}}$} in {$\mathbf{Rep}_G$}. [SHOW THIS] It sends the composition of maps in {$\mathbf{Rep}_H$} to the composition of maps in {$\mathbf{Rep}_G$}. [SHOW THIS] We can define the induction functor more abstractly as per Tammo tom Dieck. [WRITE THIS OUT] We can define the induction functor even more abstractly in terms of tensor products. [WRITE THIS OUT]

Example of an equivariant map between induced representations. [CORRECT THIS]

We can calculate the family of equivariant maps that take us from {$\theta_U$} to {$\phi_U$} defined in the section above. {$ \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}$} {$b_{11}(-u_1 + u_2) + b_{12}u_2 = b_{21}u_1 + b_{22}u_2$} {$-b_{11}u_1 + (b_{11} + b_{12})u_2 = b_{21}u_1 + b_{22}u_2$} {$b_{21}=-b_{11}$} and {$b_{22} = b_{11} + b_{12}$} We determine the family {$ \begin{pmatrix} b_{11} & b_{12} \\ -b_{11} & b_{11}+b_{12} \end{pmatrix}$}. Similarly, we calculate the family of equivariant maps {$\mathbf{Ind}^G_H(\beta)$} that take us from {$\mathbf{Ind}^G_H(\theta_U)$} to {$\mathbf{Ind}^G_H(\phi_U)$}.
Comparing matrices for each {$g\in G$}, we see that for every block there is a permutation thanks to which we make the same calculation, with the index shifted appropriately: {$ \begin{pmatrix} & & & & \\ & b_{i+1\;j+1} & b_{i+1\;j+2} & \\ & b_{i+2\;j+1} & b_{i+2\;j+2} & \\ & & & \end{pmatrix} \begin{pmatrix} & & & \\ & -1_{j+1\;k+1} & 1_{j+1\;k+2} & \\ & 0_{j+2\;k+1} & 1_{j+2\;k+2} & \\ & & \end{pmatrix} \begin{pmatrix} \\ x_{k+1} \\ x_{k+2} \\ \; \end{pmatrix} = \begin{pmatrix} & & & \\ & 0_{i+1\;j+1} & 1_{i+1\;j+2} & \\ & 1_{i+2\;j+1} & 0_{i+2\;j+2} & \\ & & \end{pmatrix} \begin{pmatrix} & & & & \\ & b_{j+1\;k+1} & b_{j+1\;k+2} & \\ & b_{j+2\;k+1} & b_{j+2\;k+2} & \\ & & & \end{pmatrix} \begin{pmatrix} \\ x_{k+1} \\ x_{k+2} \\ \; \end{pmatrix} $} {$b_{i+2\; j+1}=-b_{i+1\; j+1}$} and {$b_{i+2\; j+2} = b_{i+1\; j+1} + b_{i+1\; j+2}$} much as above, just with the index shifted. But furthermore, we can show that each block is the same. Simply consider the permutation which contains the two blocks with the same content. (I should show the calculation!) The upshot is that the equivariant maps {$\mathbf{Ind}^G_H(\beta)$} are given by matrices that consist of {$n\times n$} blocks of the matrix for {$\beta$}. In our case the equivariant map is a {$6\times 6$} matrix consisting of {$3\times 3$} blocks of size {$2\times 2$}: {$ \begin{pmatrix} b_{11} & b_{12} & b_{11} & b_{12} & b_{11} & b_{12}\\ -b_{11} & b_{11}+b_{12} & -b_{11} & b_{11}+b_{12} & -b_{11} & b_{11}+b_{12}\\ b_{11} & b_{12} & b_{11} & b_{12} & b_{11} & b_{12}\\ -b_{11} & b_{11}+b_{12} & -b_{11} & b_{11}+b_{12} & -b_{11} & b_{11}+b_{12}\\ b_{11} & b_{12} & b_{11} & b_{12} & b_{11} & b_{12}\\ -b_{11} & b_{11}+b_{12} & -b_{11} & b_{11}+b_{12} & -b_{11} & b_{11}+b_{12} \end{pmatrix}$}. Thus we see how the functor {$\mathbf{Ind}^G_H$} maps morphisms, which is to say, equivariant maps. (I should show how the functor satisfies the composition law and the identity condition.) I should also note and show that, in general, the vector spaces {$U$} and {$V$} can be of different dimension and so the equivariant map may be expressed by a rectangular {$n\cdot \textrm{dim} U \times n\cdot \textrm{dim} V$} matrix. The contragredient representation. Given a representation {$\rho : G\rightarrow GL(V)$} with character {$\chi$}, let {$V'$} be the dual of {$V$}, i.e., the space of linear forms on {$V$}. Let {$<x,x'>$} denote the value of the form {$x'$} at {$x$}. The contragredient representation {$\rho' : G\rightarrow GL(V')$} (or dual representation) of {$\rho$} is the unique linear representation such that {$<\rho_s x, \rho'_s x'> = <x,x'>$} for {$s\in G$}, {$x\in V$}, {$x'\in V'$}. The character of {$\rho'$} is {$\chi^*$}. Let {$\rho_1 : G\rightarrow GL(V_1)$} and {$\rho_1 : G\rightarrow GL(V_2)$} be two linear representations with characters {$\chi_1$} and {$\chi_2$}. Let {$W=\textrm{Hom}(V_1,V_2)$}, the vector space of linear mappings {$f:V_1 \rightarrow V_2$}. For {$s\in G$} and {$f\in W$} let {$\rho_s f = \rho_{2,s}\circ f\circ \rho^{-1}_{1,s}$}; so {$\rho_s f\in W$}. This defines a linear representation {$\rho : G\rightarrow GL(W)$}. Its character is {$\chi^*_1\cdot \chi_2$}. This representation is isomorphic to {$\rho_1'\bigotimes\rho_2$}. Pairing an equivariant map in {$\mathbf{Rep}_G(\mathbf{Ind}^G_H(\theta_U),\phi_V)$} and an equivariant map in {$\mathbf{Rep}_H(\theta_U, \mathbf{Res}^G_H(\phi_V))$}. 
The adjunction {$\mathbf{Ind}^G_H \dashv \mathbf{Res}^G_H$} pairs equivariant maps in {$\textrm{Rep}_G(\textrm{Ind}^G_H(\theta_U),\phi_V)$} with equivariant maps in {$\textrm{Rep}_H(\theta_U, \textrm{Res}^G_H(\phi_V))$} as follows. Let {$f:U\rightarrow V$} be an equivariant map such that {$f(\theta_t u)=\phi_t f(u)$} for all {$t\in H$}, {$u\in U$}. Let {$\{g_i | i\in I\}$} be a set of coset representatives of {$H$} in {$G$}. Let {$U_i$} be the copy of {$U$} associated with the coset {$g_iH$}. Set {$U_1=U$} and {$g_1H=H$}. FIX: (Explain surjectivity). We have {$\textrm{Ind}^G_H(\theta_U)(g_i):U\rightarrow U_i$} for all {$i\in I$}. Note that if {$x_i\in U_i = \textrm{Ind}^G_H(\theta_U)(g_i)U$}, then {$\textrm{Ind}^G_H(\theta_U)(g_i)^{-1}U = \textrm{Ind}^G_H(\theta_U)(g_i^{-1})U \subset U$}. Given {$x\in\bigoplus_{i\in I}U_i$}, write {$x=\sum_{i\in I}x_i$} where {$x_i\in U_i$}. Define {$F(x_i)=\phi_{g_i}f(\textrm{Ind}^G_H(g_i^{-1})x_i)$}. By linearity, this defines {$F:\bigoplus_{i\in I}U_i\rightarrow V$}.

Given an {$H$}-morphism {$\phi:V\rightarrow\textrm{Res}^G_H W$}, define a {$G$}-morphism {$\Phi:\textrm{Ind}^G_H V\rightarrow W$} which sends the summand {$gH\times_HV$} by {$(g,v)\mapsto g\cdot\phi(v)$}. Note that {$\phi(v)\in W$} and that the representation {$W$} defines the action {$g\cdot\phi(v)$}. Note also that any other representative of the summand such as {$(gh^{-1},hv)$} where {$h\in H$} leads to the same value {$gh^{-1}\phi(hv)=g\phi(v)$} since {$\phi$} is an {$H$}-morphism. {$W$} is a specific representation, which is why {$\Phi$} is determined uniquely. This uniqueness in terms of inner structure matches the uniqueness that induction expresses in terms of external relations. The implicit information lost by the restriction functor is the same as the explicit information constructed by the induction functor. ANSWER: Given an {$H$}-morphism {$\phi:V\rightarrow\textrm{Res}^G_H W$} and {$g\in G$}, is the action {$g\cdot\phi(v)$} uniquely defined? ANSWER: Is {$\textrm{Res}^G_H$} an invertible functor? ANSWER: How to understand the extension {$\textrm{Res}^G_H W\rightarrow\textrm{Res}^G_H U$}?

An example of an equivariant map in {$\mathbf{Rep}_G(\mathbf{Ind}^G_H(U,\theta_U),(V,\phi_V))$} and its pair in {$\mathbf{Rep}_H((U,\theta_U), \mathbf{Res}^G_H(V,\phi_V))$}.

Let {$G=S_3$} with subgroup {$H=\{(),(12)\}$}. Let {$U=\mathbb{C}^2$} and {$V=\mathbb{C}^3$}. Let {$\theta_U$} be the representation of {$H$} that maps {$\mathbf{()}\rightarrow \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$}, {$\mathbf{(12)}\rightarrow \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}$} And let {$\phi_V$} be the representation of {$G$} that is given by the permutation representation consisting of {$3\times 3$} permutation matrices. Then the induced representation {$\mathbf{Ind}^G_H(U,\theta_U)$} of {$G$} acts on {$U \bigoplus U \bigoplus U =\mathbb{C}^6=\mathbb{C}^2\bigotimes\mathbb{C}^3$} whereas the restricted representation {$\mathbf{Res}^G_H(V,\phi_V)$} of {$H$} acts on {$V=\mathbb{C}^3$}.
We can calculate the equivariant map {$\alpha = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix}$} which satisfies the commutative map for representations of {$H$}: {$$\begin{matrix} & U & \overset{\alpha}\longrightarrow & V \\ \theta_U(h) & \downarrow & & \downarrow & \mathbf{Res}^G_H(V,\phi_V)(h) \\ & U & \overset{\alpha}\longrightarrow & V \end{matrix}$$} Given {$u=(u_1,u_2)$} and {$h\in H$} we have equations of the form {$\alpha \cdot \theta_U(h) \cdot u = \phi_V(h) \cdot \alpha \cdot u$}. The equation for {$h=\mathbf{()}$} simply says that {$\alpha u = \alpha u$}. The equation for {$h=\mathbf{(12)}$} is: {$$ \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix} \begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \\ a_{31} & a_{32} \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} $$} Multiplying out, this yields {$$\begin{pmatrix} -\alpha_{11} u_1 + \alpha_{11} u_2 + \alpha_{12} u_2 & -\alpha_{21} u_1 + \alpha_{21} u_2 + \alpha_{22} u_2 & -\alpha_{31} u_1 + \alpha_{31} u_2 + \alpha_{32} u_2 \end{pmatrix} = \begin{pmatrix} \alpha_{21} u_1 + \alpha_{22} u_2 & \alpha_{11} u_1 + \alpha_{12} u_2 & \alpha_{31} u_1 + \alpha_{32} u_2 \end{pmatrix} $$} Simplifying, we have equations {$\alpha_{21}=-\alpha_{11}$}, {$\alpha_{22}=\alpha_{11}+\alpha_{12}, \alpha_{31}=-\alpha_{31}=0, \alpha_{32}=\alpha_{32}$}. And so the six variables reduce to three variables. Fixing the values of {$a_{11}$}, {$a_{12}$}, {$a_{32}$} yields an equivariant map. Thus we have a family of maps given by: {$\alpha = \begin{pmatrix} a_{11} & a_{12} \\ -a_{11} & a_{11}+a_{12} \\ 0 & a_{32} \end{pmatrix}$} Similarly, we can calculate the equivariant map {$\gamma = \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix}$} which for every {$g\in G$} satisfies the commutative map: {$$\begin{matrix} & U \bigoplus U \bigoplus U & \overset{\gamma}\longrightarrow & V \\ \mathbf{Ind}^G_H(\theta_U, U)(g) & \downarrow & & \downarrow & \phi_V(g) \\ & U \bigoplus U \bigoplus U & \overset{\gamma}\longrightarrow & V \end{matrix}$$} {$ \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} 1 & 0 & & & & \\ 0 & 1 & & & & \\ & & 1 & 0 & & \\ & & 0 & 1 & & \\ & & & & 1 & 0 \\ & & & & 0 & 1 \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} $} {$ \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} -1 & 1 & & & & \\ 0 & 1 & & & & \\ & & & & -1 & 1 \\ & & & & 0 & 1 \\ & & -1 & 1 & & \\ & & 0 & 1 & & \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} = 
\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} $} {$ \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} & & 1 & 0 & & \\ & & 0 & 1 & & \\ 1 & 0 & & & & \\ 0 & 1 & & & & \\ & & & & -1 & 1 \\ & & & & 0 & 1 \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} $} {$ \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} & & -1 & 1 & & \\ & & 0 & 1 & & \\ & & & & 1 & 0 \\ & & & & 0 & 1 \\ -1 & 1 & & & & \\ 0 & 1 & & & & \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} $} {$ \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} & & & & 1 & 0 \\ & & & & 0 & 1 \\ & & -1 & 1 & & \\ & & 0 & 1 & & \\ 1 & 0 & & & & \\ 0 & 1 & & & & \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} $} {$ \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} & & & & -1 & 1 \\ & & & & 0 & 1 \\ -1 & 1 & & & & \\ 0 & 1 & & & & \\ & & 1 & 0 & & \\ & & 0 & 1 & & \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} c_{11} & c_{12} & c_{13} & c_{14} & c_{15} & c_{16} \\ c_{21} & c_{22} & c_{23} & c_{24} & c_{25} & c_{26} \\ c_{31} & c_{32} & c_{33} & c_{34} & c_{35} & c_{36} \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{21} \\ u_{22} \\ u_{31} \\ u_{32} \end{pmatrix} $} Solving these matrix equations I arrive at ... (need to redo!) 
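A small numerical check (NumPy assumed; a sketch, not part of the original notes) imposes the same conditions for the generators (12) and (13), confirms that the solution space is three-dimensional, and verifies that the family displayed next satisfies the equations:

```python
import numpy as np

# Generator blocks used in the matrix equations above: theta_(12) = [[-1,1],[0,1]], theta_() = I.
T = np.array([[-1.0, 1.0], [0.0, 1.0]])
I2 = np.eye(2)
Z = np.zeros((2, 2))

Ind = {"(12)": np.block([[T, Z, Z], [Z, Z, T], [Z, T, Z]]),
       "(13)": np.block([[Z, I2, Z], [I2, Z, Z], [Z, Z, T]])}
P = {"(12)": np.array([[0.0, 1, 0], [1, 0, 0], [0, 0, 1]]),
     "(13)": np.array([[0.0, 0, 1], [0, 1, 0], [1, 0, 0]])}

# Stack the linear conditions C Ind(g) - P(g) C = 0 (g = (12), (13)) on the 18 entries of C.
blocks = []
for g in Ind:
    cols = []
    for k in range(18):
        E = np.zeros(18); E[k] = 1.0
        C = E.reshape(3, 6)
        cols.append((C @ Ind[g] - P[g] @ C).reshape(18))
    blocks.append(np.column_stack(cols))
L = np.vstack(blocks)

# The solution space is three-dimensional, matching the three free constants c11, c12, c32.
print(sum(s < 1e-10 for s in np.linalg.svd(L, compute_uv=False)))   # 3

# The family displayed below satisfies the equations (sample values for the free constants).
c11, c12, c32 = 1.0, 2.0, 5.0
gamma = np.array([[c11, c12, 0, c32, c11, c12],
                  [-c11, c11 + c12, -c11, c11 + c12, 0, c32],
                  [0, c32, c11, c12, -c11, c11 + c12]])
assert all(np.allclose(gamma @ Ind[g], P[g] @ gamma) for g in Ind)
```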
The solution should be: {$$ \gamma = \begin{pmatrix} c_{11} & c_{12} & 0 & c_{32} & c_{11} & c_{12} \\ -c_{11} & c_{11}+c_{12} & -c_{11} & c_{11}+c_{12} & 0 & c_{32} \\ 0 & c_{32} & c_{11} & c_{12} & -c_{11} & c_{11}+c_{12} \end{pmatrix} $$} The morphisms can be thought of as machines. This says that looking through the front side of machines and through the back side of machines is the same information. The HomSet definition of {$\mathbf{Ind}^G_H \dashv \mathbf{Res}^G_H$} [Write up the HomSet definition of the adjunction.] The online book Tammo ton Dieck. Representation Theory. 2009 is very helpful because in Proposition 4.1.2 (page 51) it makes explicit the bijection between the HomSets. As regards finite-dimensional representations, I find it helpful to make explicit the dimensions of the matrices. An example illustrating the HomSet definition of {$\mathbf{Ind}^G_H \dashv \mathbf{Res}^G_H$} Consider the trivial group {$H=()$} as a subgroup of {$G=(12)$}. {$ \{\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}, \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}\} $} {$\begin{pmatrix} a_{11} & a_{12} & 0 & 0 \\ 0 & 0 & a_{11} & a_{12} \end{pmatrix}$} {$\{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\}$} {$\begin{pmatrix} \phi_{11} & -\phi_{11}+\phi_{21} \\ \phi_{22} & \phi_{21} \end{pmatrix}$} {$\{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\begin{pmatrix} -1 & 1 \\ 0 & 1 \end{pmatrix}\}$} {$\begin{pmatrix} 0 & b_{12} \\ 0 & b_{22} \\ 0 & b_{32} \end{pmatrix}$} {$\{\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix},\begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{pmatrix}\}$} in {$\textrm{Rep}_G$} {$\textrm{Ind}^G_H \rho_{W'}$} {$ \overset{\textrm{Ind}^G_H\alpha}{\longrightarrow} $} {$\textrm{Ind}^G_H \rho_W$} {$\overset{\Phi}{\longrightarrow}$} {$\theta_V$} {$\overset{\beta}{\longrightarrow}$} {$\theta_{V'}$} {$\Uparrow $} {$\Uparrow $} {$\Downarrow $} {$\Downarrow $} in {$\textrm{Rep}_H$} {$\rho_{W'}$} {$\overset{\alpha}{\longrightarrow} $} {$\rho_W$} {$\overset{\Phi\circ i^G_H}{\longrightarrow}$} {$\textrm{Res}^G_H\theta_V$} {$\overset{\textrm{Res}^G_H\beta}{\longrightarrow}$} {$\textrm{Res}^G_H\theta_{V'}$} {$\{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\}$} {$\begin{pmatrix} a_{11} & a_{12}\end{pmatrix}$} {$\{\begin{pmatrix} 1 \end{pmatrix}\}$} {$\begin{pmatrix} \phi_{11} \\ \phi_{22}\end{pmatrix}$} {$\{\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}\}$} {$\begin{pmatrix} 0 & b_{12} \\ 0 & b_{22} \\ 0 & b_{32} \end{pmatrix}$} {$\{\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}\}$} This example illustrates that in this case the bijection between the HomSet adjunction maps a matrix of independent entries {$a_{11}$}, {$a_{21}$} to a matrix that depends on these entries. [IS THIS BIJECTION ONTO ALL POSSIBLE EQUIVARIANT MORPHISMS This example illustrates how the induction functor is not a full functor. The equivariant morphisms from {$\textrm{Ind}\rho_{W'}$} to {$\textrm{Ind}\rho_{W}$} are the matrices of the form {$\begin{pmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{13} & a_{14} & a_{11} & a_{12} \end{pmatrix}$}. The image of the induction functor gives only those for which {$0=a_{13}=a_{14}$}. This example also illustrates how the restriction functor is not a full functor, which is to say, it is not surjective. 
Note that the equivariant morphisms from {$I_2$} to {$I_3$} are the matrices of the form {$\begin{pmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \end{pmatrix}$}. But the image of the restriction functor gives only those for which {$0=b_{11}=b_{21}=b_{31}$}.

The Triangle Equalities definition of {$\mathbf{Ind}^G_H \dashv \mathbf{Res}^G_H$}

The Universal Mapping Property definition of {$\mathbf{Ind}^G_H \dashv \mathbf{Res}^G_H$}

Show that {$\mathbf{Res}^G_H \dashv \mathbf{Ind}^G_H$} when {$G$} and {$H$} are finite groups. This is known as coinduction. In the case of finite groups, induction and coinduction coincide.

Iordan Ganev's notes for 'Representations of Finite Groups' relate induction and coinduction.

https://mathoverflow.net/questions/1534/induction-and-coinduction-of-representations
https://people.math.rochester.edu/faculty/doug/otherpapers/DWilson-slice.pdf
https://ncatlab.org/nlab/show/induced+representation
https://mathoverflow.net/questions/132272/wrong-way-frobenius-reciprocity-for-finite-groups-representations

Go through the three definitions of adjunction to show that {$\mathbf{Res}^G_H \dashv \mathbf{Ind}^G_H$}.
Understand the adjunction that relates extension of scalars and restriction of scalars.
Understand the adjoint string that relates change of rings.
Understand Shapiro's lemma.

The induced representation divides the group action into two actions: a permutation externally on the cosets and an action internally on the subgroup elements. How does it relate external relationships and internal structure? How does it divide one action (the adjoint of restriction) into two actions?

Explanation to study: Consider in what sense Cramer's rule expresses the inverse of a matrix in terms of permutations, and how that might relate to the adjunction between restriction and induction.

Joshua Wong. Representation Theory with a Perspective from Category Theory. Discussion of the adjunction of induction and restriction, and of restriction and extension of scalars.

Shapiro's lemma: an extension of Frobenius reciprocity.

Green functors: induction-restriction is an example.

https://en.m.wikipedia.org/wiki/Brauer%27s_theorem_on_induced_characters

Understand how the induced representation of S3 from S2 with the standard representation works.

Find all homomorphisms S3→Z2⊕Z2⊕Z3.

See also: Daphne Kao. Representations of the Symmetric Group.

Work through the example for {$D_6$}.
Journal of Biosystems Engineering ISSN:1738-1266(Print) 2234-1862(Online) 바이오시스템공학 Journal of Biosystems Engineering. December 2018. 255-262 https://doi.org/10.5307/JBE.2018.43.4.255 Experimental Study of the Dynamic Characteristics of Rubber Mounts for Agricultural Tractor Cabin Kyujeong Choi1 Jooseon Oh1 Davin Ahn1 Young-Jun Park12* Sung-Un Park3 Heung-Sub Kim4 1Department of Biosystems & Biomaterials Science and Engineering, Seoul National University, Seoul, 08826, Korea 2Research Institute of Agriculture and Life Sciences, College of Agriculture and Life Sciences, Seoul National University, Seoul, 08826, Korea 3Research and Development Institute, Tongyang Moolsan Co., Ltd. Gongju, 32530, Korea 4Department of Smart Industrial Machine Technologies, Korea Institute of Machinery and Materials, Daejeon, 34103, Korea Purpose: To obtain the dynamic characteristics (spring stiffness and damping coefficient) of a rubber mount supporting a tractor cabin in order to develop a simulation model of an agricultural tractor. Methods: The KS M 6604 rubber mount test method was used to test the dynamic characteristics of the rubber mount. Of the methods proposed in the standard, the resonance method was used. To perform the test according to the standard, a base excitation test device was constructed and the accelerations were measured. Results: Displacement transmissibility was measured by varying the frequency from 3–30 Hz. The vibration transmissibility at resonance was confirmed, and the dynamic stiffness and damping coefficient of the rubber mount were obtained. The front rubber mount has a spring constant of 1247 N/mm and damping ratio of 3.27 Ns/mm, and the rear rubber mount has a spring constant of 702 N/mm and damping ratio of 1.92 Ns/mm. Conclusions: The parameters in the z-direction were obtained in this study. In future studies, we will develop a more complete tractor simulation model if the parameters for the x- and y-directions can be obtained. Base excitation system Dynamic characteristics Rubber mount Tractor simulation model Tractor and rubber mount used in study Theory of dynamic system Experimental condition Problems caused by vibrations over extended periods of time can result in hazardous working conditions for tractor drivers, and can cause them physical or health problems. Vibrations are an ongoing problem that have needed continuous attention in the field of agricultural machinery, and numerous technical innovations and performance improvements are required. To reduce vibration in an agricultural tractor, a rubber mount is typically installed between the cabin and the cabin mount frame. However, when the rubber mount does not perform satisfactorily, an excessive ride vibration is generated. Chung et al. (2017) developed a simulation model that could predict the ride vibration of a tractor, and it has been actively used in studies to propose optimal design values for factors affecting ride vibration. However, further studies on the results of this study are required because tests to date were based on assumptions regarding the dynamic characteristics of rubber mounts. To predict ride vibration using a simulation model, the spring stiffness and damping coefficient of the seat, rubber mount, and tire affecting the vibration must be accurately determined. Information regarding the dynamic characteristics of the rubber mount are not provided by manufacturers, and only limited studies have been conducted regarding the dynamic characteristics. 
Methods of determining the dynamic characteristics of the rubber mount include modeling techniques and tests. Lee et al. (2013) compared the results of the dynamic analysis using Abaqus and the results from non- resonance methods on a single piece of rubber, and reported the validity of the analysis. However, there were numerous values with error rates greater than 20% between the analysis and test results. Therefore, it is difficult to determine whether the analysis results for the rubber mount is accurate in all cases. To improve the performance of the rubber mount as an insulator, Choi et al. (2018) performed shape optimization of the rubber mount. The properties of the rubber were analyzed using a rubber tensile test, and a finite element analysis was also conducted. Analysis of the rubber mount with the optimum design indicated a 35% vibration insulation effect compared to the initial rubber mount. Cho et al. (1999) proposed a direct system method to directly determine the dynamic characteristics of rubber mounts using excitation and response vectors and excitation frequencies obtained from modal tests. However, the direct system determination method is relatively accurate only when system behavior in the frequency band of interest is applied to a rigid body motion system, therefore, there are limits to how it responds to the dynamic characteristics of rubber mounts. The results of study indicated that spring stiffness included errors of up to 14.8% and that damping coefficients included errors up to 8.7%. Hur et al. (2004) conducted static and dynamic tests using rubber specimens for rubber mounts supporting automotive transmissions and reported superelastic and viscoelastic properties. The rubber mount was analyzed for comparison with test results and to confirm the validity of the study. Yim et al. (2000) developed a program that enabled the dynamic analysis and optimal design of automotive engine mounts. Using this analysis, it was possible to investigate the validity of a designed system and study the static mount, vibration mode, and frequency response analyses of an engine mount system. Kwon et al. (2001) used experimental methods to determine the dynamic characteristics of rubber mounts that depended on the deformation amplitude and frequency of rubber mounts subjected to compressive and shear loads. They obtained the dynamic and static characteristics for all three axes, and validated their results by comparing them with analytical results from the finite element method. As can be seen, only limited studies have been conducted to confirm the dynamic characteristics of rubber mounts for agricultural tractors. In addition, as the dynamic characteristics of rubber mounts are unknown, static test results or methods that assume appropriate values for parameters of dynamic characteristics are used, resulting in inaccurate results. Therefore, this study, by determining the dynamic characteristics, is innovative in that it can improve the accuracy of tractor simulation models. Studies have been conducted in the automotive field on rubber mounts supporting engines and transmissions, however, these studies, to confirm dynamic characteristics, were conducted using rubber specimens. As the rubber mounts supporting the cabins of agricultural tractors are a combination of steel plate and rubber, it is difficult to apply research methods utilizing rubber specimens. 
In this study, the dynamic characteristics (spring stiffness and damping coefficient) of a rubber mount supporting a tractor cabin were determined using the resonance method presented in KS M 6604. Through this study, we expect to be able to construct an accurate simulation model for ride vibration prediction.

This study analyzed the rubber mount used in a 107 kW domestic tractor. The specifications of the tractor and cabin are presented in Table 1. The rubber mount used to measure the dynamic characteristics is shown in Figure 1. The rubber mount for tractors comprises a base plate, rubber, and a pipe used to fasten the bolt.

Table 1. Tractor specifications
Power: 106.65 kW
Total weight: 5655 kg
Cabin weight (front): 265.1 kg
Cabin weight (rear): 292.4 kg

Figure 1. Rubber mount front view (a), top view (b), its individual components (c), and cross-section view (d).

The tractor cabin is supported by four rubber mounts. Typically, rubber mounts with different physical properties are used for the front and rear, and the same rubber mounts are installed on the left- and right-hand sides. The reason for this is that the left and right weights of the cabin are similar, but the front and rear weights differ. Figure 2 shows the rubber mounts installed in the tractor.

Figure 2. Location of rubber mounts (left), and detailed view of rear rubber mount (right).

As methods of confirming the dynamic characteristics of a rubber mount, the KS M 6604 standard proposes a resonance method and a non-resonance method. The same test criteria are used for both methods, and are presented in Table 2.

Table 2. Test conditions in KS M 6604 standard
Test temperature (℃): 20
Test frequency (Hz): 10
Average strain (%): 0
Shear strain amplitude (%): 1.0

The test frequency applies only to the non-resonance method. When a test using the resonance method is conducted, the test frequency becomes the resonance frequency.

Non-resonance method

To confirm the dynamic characteristics of a rubber mount using a non-resonance method, tests are typically performed by applying a load to the rubber mount and measuring the deformation resulting from the load. First, a load–strain curve is constructed from the test results and a rectangle circumscribing this curve is drawn. Figure 3 shows the circumscribed rectangle on a graph. The x-axis is the deformation, with the width of the rectangle corresponding to 2x0, and the y-axis is the load, with the height corresponding to 2P0.

Figure 3. Load–strain curve.

The area 2W enclosed by the rectangle and the area 𝛥W enclosed by the load–deformation curve are then calculated and applied to Eqs. (1)–(3) to calculate the spring constant.

$$\left|k^\ast\right|=\frac{P_0}{x_0}=\overline{BC}/\overline{AB}$$ (1)

$$\sin\delta=\left(2/\pi\right)\left(\Delta W/W\right)=\overline{H'H}/\overline{AB}=\overline{J'J}/\overline{BC}$$ (2)

$$k=\left|k^\ast\right|\cos\delta=\left|k^\ast\right|\sqrt{1-\sin^2\delta}$$ (3)

where
P0: half the height of the circumscribed rectangle (load amplitude), N
x0: half the width of the circumscribed rectangle (deformation amplitude), m
𝛥W: area enclosed by the load–deformation curve, N·m
W: half the area of the circumscribed rectangle, N·m
𝛿: loss angle, rad
k*: absolute spring stiffness, N/m
k: spring stiffness, N/m

Resonance method

The theory presented in this section is based on Inman (2014).
To confirm the dynamic characteristics of a rubber mount using the resonance method, the resonance curve is recorded and the base excitation amplitude 𝜉0 of the forced displacement is measured. The mass amplitude x0 is then measured by varying the frequency while keeping 𝜉0 constant. The frequency at which x0 reaches its maximum is referred to as the resonance frequency, and the amplitudes measured at this frequency are shown in Figure 4.

Figure 4. Frequency–amplitude curve.

The vertical ride vibration model for tractors can be simplified as a system that is excited at its base, the so-called base excitation system. The simplified vibration model is shown in Figure 5. The displacement at the base is given by y(t) and the displacement of the cabin placed on the rubber mount is x(t). The displacement of the base is defined in Eq. (4).

Figure 5. Model of base excitation system.

$$y(t)=Y\sin\omega_b t$$ (4)

where
Y: amplitude of base excitation, m
𝜔b: angular frequency of base excitation, rad/s

The equation of motion of the system shown in Figure 5 can be expressed as follows:

$$m\ddot x+c\left(\dot x-\dot y\right)+k\left(x-y\right)=0$$ (5)

where
m: mass, kg
c: damping coefficient, kg/s

Equation (4) is substituted into Eq. (5) as follows:

$$m\ddot x+c\dot x+kx=cY\omega_b\cos\omega_bt+kY\sin\omega_bt$$ (6)

Equation (6) represents a spring–mass–damper system with two harmonic inputs. Because the equation of motion is linear, its solution is obtained by superposition: the particular solution xp(1), obtained by assuming the input cYωb cos ωbt, is added to the particular solution xp(2), obtained by assuming the input kY sin ωbt. Equation (7) is obtained by dividing both sides of Eq. (6) by m and rearranging. The angular natural frequency 𝜔n and the damping ratio 𝜁 appear in Eq. (7); these parameters are defined in Eqs. (8) and (9), respectively.

$$\ddot x+2\zeta\omega_n\dot x+\omega_n^2x=2\zeta\omega_n\omega_bY\cos\omega_bt+\omega_n^2Y\sin\omega_bt$$ (7)

$$\omega_n=\sqrt{\frac km}$$ (8)

$$\zeta=\frac c{2m\omega_n}$$ (9)

where
𝜔n: angular natural frequency, rad/s
𝜁: damping ratio

As discussed above, the solution of Eq. (7) can be obtained by finding the particular solutions of Eqs. (10) and (11) and combining them. The particular solutions are given by Eqs. (12) and (13).

$$\ddot x+2\zeta\omega_n\dot x+\omega_n^2x=2\zeta\omega_n\omega_bY\cos\omega_bt$$ (10)

$$\ddot x+2\zeta\omega_n\dot x+\omega_n^2x=\omega_n^2Y\sin\omega_bt$$ (11)

$$x_p^{(1)}=\frac{2\zeta\omega_n\omega_bY}{\sqrt{\left(\omega_n^2-\omega_b^2\right)^2+\left(2\zeta\omega_n\omega_b\right)^2}}\cos\left(\omega_bt-\theta_1\right)$$ (12)

$$x_p^{(2)}=\frac{\omega_n^2Y}{\sqrt{\left(\omega_n^2-\omega_b^2\right)^2+\left(2\zeta\omega_n\omega_b\right)^2}}\sin\left(\omega_bt-\theta_1\right)$$ (13)

where \(\theta_1=\tan^{-1}\left[2\zeta\omega_n\omega_b/(\omega_n^2-\omega_b^2)\right]\), rad

The complete particular solution is the sum of the two:

$$x_p\left(t\right)=x_p^{(1)}+x_p^{(2)}=\omega_nY\left[\frac{\omega_n^2+(2\zeta\omega_b)^2}{(\omega_n^2-\omega_b^2)^2+(2\zeta\omega_n\omega_b)^2}\right]^\frac12\cos(\omega_bt-\theta_1-\theta_2)$$ (14)

where \(\theta_2=\tan^{-1}\left[\omega_n/(2\zeta\omega_b)\right]\), rad

If the amplitude of the particular solution xp(t) is defined as X, then the ratio of the magnitude X (amplitude of the mass) to the magnitude Y (amplitude of the base vibration input to the system) can be obtained.
This ratio is called the displacement transmissibility, 𝜇, and can be expressed as follows:

$$\mu=\frac XY=\left[\frac{1+\left(2\zeta\frac{\omega_b}{\omega_n}\right)^2}{\left(1-\left(\frac{\omega_b}{\omega_n}\right)^2\right)^2+\left(2\zeta\frac{\omega_b}{\omega_n}\right)^2}\right]^\frac12=\left[\frac{1+(2\zeta r)^2}{(1-r^2)^2+(2\zeta r)^2}\right]^\frac12$$ (15)

where
r = 𝜔b/𝜔n
𝜇: displacement transmissibility

In Eq. (15), when 𝜔b = 𝜔n, that is, when the frequency of the base excitation is equal to the natural frequency, the base transmits the greatest motion to the system. The natural frequency is therefore identified from the measured displacement transmissibility, and the dynamic characteristics are determined from it. In this study the ratio of accelerations is measured; displacement and acceleration transmissibility are theoretically identical. The damping ratio 𝜁 is obtained by substituting r = 1 into Eq. (15), which gives Eq. (16). From the theory, the maximum displacement transmissibility is obtained when the frequency ratio is 1, at any damping ratio. This can be seen in Figure 6.

Figure 6. Amplitude versus frequency ratio (r) of the steady-state response for several damping ratios (𝜁).

$$\zeta=\frac1{2\sqrt{\mu^2-1}}$$ (16)

where 𝜁: damping ratio, obtained from the transmissibility at the natural frequency

From the damping ratio, the spring stiffness and damping coefficient can be obtained from Eqs. (17) and (18).

$$k=(2\pi f_n)^2m=\omega_n^2m$$ (17)

$$c=2m\omega_n\zeta$$ (18)

The top of Figure 7 shows the actual dynamic characteristics test of a rubber mount, and the bottom shows a schematic of the setup. An accelerometer is mounted on the exciter to measure the motion of the testing device and other data.

Figure 7. Setup of dynamic characteristics test.

A mass identical to the mass of the actual tractor cabin was applied to the rubber mount on the test device. Accelerometers were installed on the base of the test device and on the surface of the rubber mount, and accelerations were measured while the excitation frequency was increased at a constant rate. A number of preliminary tests confirmed that the natural frequencies appear between 3 Hz and 30 Hz, and the tests were therefore conducted in that frequency range. An example of base excitation is shown graphically in Figure 8.

Figure 8. Example of base excitation.

The specifications of the accelerometers and signal analyzer used in this study are presented in Table 3. Figure 9 shows the accelerometer and signal analyzer used in this study.

Table 3. Specifications of accelerometer and signal analyzer
Accelerometer (PCB 333B40): sensitivity 51.0 mV/(m/s²) (±10%); measurement range ±98 m/s² pk; broadband resolution 0.0005 m/s² rms; frequency range (±5%) 0.5–3000 Hz
Signal analyzer (B&K 3053-B-120): input channels 12; frequency range 0–25.6 kHz

Figure 9. Accelerometer installation and signal analyzer.

Assuming that the measured acceleration of the excitation surface is ain and the measured acceleration of the mass is aout, the relationship between displacement and acceleration is as follows:

$$X=\left|\frac{a_{out}}{\left(2\pi f\right)^2}\right|$$ (19)

$$Y=\left|\frac{a_{in}}{\left(2\pi f\right)^2}\right|$$ (20)

where
f: frequency of base excitation, Hz
aout: acceleration at the mass, m/s²
ain: acceleration at the base, m/s²

The displacement transmissibility at each frequency can then be obtained from the X and Y values at that frequency.
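As a purely illustrative aside (the original study reports results directly and contains no code), the short R sketch below applies Eqs. (16)–(18) to one assumed set of resonance measurements; the input values match the front-mount figures reported in the results section that follows.

```r
# Dynamic characteristics from a resonance measurement, Eqs. (16)-(18).
# Input values correspond to the front rubber mount, test 1 (see Table 4).
m   <- 132.55   # applied mass, kg
f_n <- 15.438   # measured natural frequency, Hz
mu  <- 4.099    # displacement transmissibility at resonance

zeta    <- 1 / (2 * sqrt(mu^2 - 1))   # damping ratio, Eq. (16)
omega_n <- 2 * pi * f_n               # angular natural frequency, rad/s
k       <- omega_n^2 * m              # spring stiffness, N/m, Eq. (17)
c_damp  <- 2 * m * omega_n * zeta     # damping coefficient, N s/m, Eq. (18)

round(zeta, 3)            # ~0.126
round(k / 1000)           # ~1247 N/mm
round(c_damp / 1000, 2)   # ~3.23 Ns/mm
```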
The values of X and Y are shown in Figure 10.

Figure 10. Displacement at base and mass (front rubber mount, test #1).

The point of maximum displacement transmissibility across the frequency range is called the resonance point, and the dynamic characteristics of the rubber mount are calculated using the natural frequency. The tests were conducted twice each for the front and rear rubber mounts.

Figure 11 shows the displacement transmissibility versus excitation frequency for the front rubber mount. As the graphs of tests 1 and 2 are essentially identical, the error between the first and second tests was judged to be insignificant. Table 4 presents the dynamic characteristics calculated using the natural frequencies from the two tests.

Figure 11. Displacement transmissibility curve of front rubber mount.

Table 4. Results of front (FR) rubber mount test
Item: FR Test 1, FR Test 2
Mass (kg): 132.55
Natural frequency (Hz): 15.438, 15.438
Displacement transmissibility, 𝜇: 4.099, 4.012
Spring stiffness, k (N/mm): 1247.077, 1247.077
Damping coefficient, c (Ns/mm): 3.234, 3.309
Damping ratio, ζ: 0.126, 0.129

The applied mass was 132.55 kg, which is half of the front cabin mass presented in Table 1. The natural frequency and displacement transmissibility were obtained from the tests, and the spring stiffness, damping coefficient, and damping ratio were calculated using Eqs. (16)–(18).

Figure 12 shows the displacement transmissibility versus excitation frequency for the rear rubber mount. As for the front rubber mount, the error between the first and second tests was judged to be insignificant. Table 5 presents the dynamic characteristics calculated using the natural frequency of the rear rubber mount test.

Table 5. Results of rear (RR) rubber mount test
Item: RR Test 1, RR Test 2
Mass, m (kg): 146.2
Natural frequency (Hz): 10.9375, 11.125
Spring stiffness, k (N/mm): 690.468, 714.344

The results of these tests confirmed that the spring stiffness and damping coefficient of the front rubber mount are greater by 77% and 71%, respectively, than those of the rear rubber mount.

This study determined the spring stiffnesses and damping coefficients, which are the dynamic characteristics of the rubber mounts supporting a tractor cabin, in order to develop models that simulate ride vibration in agricultural tractors. The theoretical background of the method proposed in the KS M 6604 standard was reviewed and the results were then verified by test. However, a limitation of this study is that it only determined the dynamic characteristics in the vertical direction, as the KS M 6604 standard only defines tests for this direction. If the dynamic characteristics of rubber mounts in other directions can be confirmed in future studies, simulation models for ride vibration prediction will be more accurate. The dynamic characteristics for the translational and rotational directions (rolling, yawing, and pitching) should be obtained. It is also necessary to confirm the influence of the spring stiffness and damping coefficient of the rubber mount on the behavior of the cabin, and to investigate whether the design of the rubber mount is valid.

The authors have no conflicting financial or other interests.

This work was supported by the Technology Development Program (S2305668) funded by the Ministry of SMEs and Startups (MSS, Korea).

Cho, J. S., K. U. Kim and H. J. Park. 2000. Determination of dynamic parameters of agricultural tractor cab mount system by a modified DSIM. Transactions of the ASAE 43(6): 1365-1369. 10.13031/2013.3033
Choi, H.-J. and S.-H. Lee. 2018.
Shape optimal design of anti-vibration rubber assembly to reduce the vibration of a tractor cabin. Journal of the Korea Academia-Industrial Cooperation Society 19(4): 657-663. 10.5762/KAIS.2018.19.4.657
Chung, W. J., J.-S. Oh, Y. N. Park, D.-C. Kim and Y.-J. Park. 2017. Optimization of the suspension design to reduce the ride vibration of 90kW-class tractor cabin. Journal of the Korean Society of Manufacturing Process Engineers 16(5): 91-98. 10.14775/ksmpe.2017.16.5.091
Hur, S., W. D. Kim, C. S. Woo, W. S. Kim and S. B. Lee. 2004. The study of static and dynamic characteristics for TM rubber mount. In: Proceedings of the KSAE Conference, pp. 1243-1248, Jeongseon, Kangwon, Korea: June 2004.
Inman, D. J. 2014. Engineering Vibration, 4th ed. Boston: Pearson Education, Inc.
KS. 2016. KS M 6604:2016. Testing method for rubber vibration isolators. Seoul, Rep. Korea: Korean Standard Association.
Kwon, O. B., J. Y. Kim, Y. K. Kim, M. S. Han and C. S. Ko. 2001. The study of static and dynamic characteristics for an isolation rubber mount using the complex stiffness. In: Proceedings of the KSNVE Conference, pp. 927-932, Pyeongchang, Gangwon-do, Korea: November 2001.
Lee, S.-B., J.-H. Jung, J.-H. Choi and Y.-H. Lee. 2013. Development of dynamic modeling of rubber mount. In: Proceedings of the KSNVE, pp. 87-91, Gyeongju, Gyeongsangbuk-do, Korea: October 2013.
Yim, H. J., S. J. Sung and S. B. Lee. 2001. Integrated system for dynamic analysis and optimal design of engine mount systems. Journal of the KSNVE 11(1): 36-40.
Large-scale Assessments in Education An IEA-ETS Research Institute Journal Software Article lsasim: an R package for simulating large-scale assessment data Tyler H. Matta1,2, Leslie Rutkowski2,3, David Rutkowski2,3 & Yuan-Ling Liaw2 Large-scale Assessments in Education volume 6, Article number: 15 (2018) Cite this article This article provides an overview of the R package lsasim, designed to facilitate the generation of data that mimics a large scale assessment context. The package features functions for simulating achievement data according to a number of common IRT models with known parameters. A clear advantage of lsasim over other simulation software is that the achievement data, in the form of item responses, can arise from multiple-matrix sampled test designs. Furthermore, lsasim offers the possibility of simulating data that adhere to general properties found in the background questionnaire (mostly ordinal, correlated variables that are also related to varying degrees with some latent trait). Although the background questionnaire data can be linked to the test responses, all aspects of lsasim can function independently, affording researchers a high degree of flexibility in terms of possible research questions and the part of an assessment that is of most interest. An important tool for monitoring educational systems around the world, international large-scale assessments (ILSAs) are cross-national, comparative studies of achievement. ILSAs are used to measure educational achievement in select content domains for representative samples of students enrolled in primary and secondary educational systems. The achievement tests that students take are intended to be an adequate representation of what students know and can do in the relevant content areas (e.g., math, science, and reading). And the results of these assessments are used to compare educational systems (usually, but not exclusively, countries) and to inform policy, practice, and educational research both nationally and internationally. In terms of numbers of participants, these studies have grown tremendously over the past few decades. Today, two-thirds of all countries with populations greater than 30,000 have participated in one or more international or regional large-scale assessments (Lockheed et al. 2015). Among the most well-known ILSAs are the Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA). On a 4 year cycle, beginning in 1995, TIMSS measures mathematics and science in a representative sample of fourth and eighth grade students. Starting in 2000 and every 3 years afterward, PISA assesses 15-year-olds enrolled in school in math, science, and reading. Besides the subject assessment (e.g. math, science, and reading tests), these studies also solicit information from students, their teachers, principals, and their parents regarding beliefs, attitudes, experiences, and the context of schooling. With over half a million students from 70 educational systems taking part, PISA is now the largest such study (OECD 2017). New versions of PISA, such as PISA for Development, targeting developing economies, and PISA for schools, focused on providing participating schools with internationally comparable results, will only increase these numbers and bring international assessments to new contexts and audiences. 
The quantitative nature, scale, and scope of these and related modern educational surveys necessitates a fairly sophisticated approach to the survey design, sampling, analysis, and reporting. We elaborate subsequently. The first such study, the Pilot Twelve-Country Study (Foshay et al. 1962) was revolutionary at the time—before email, the Internet, and fast, desktop computers. By modern standards of educational and psychological measurement AERA, APA, and NCME (2014), however, the study was primitive, using classical test theory methods to uncover data of relatively low quality on a single test form. Beginning with the 1995 cycle of TIMSS, international assessments adopted an item response theory-based approach to scaling and a sophisticated booklet design, referred to as multiple matrix sampling (MMS) (Shoemaker 1973). Essentially, MMS is a method that divides test items into non-overlapping blocks that are then assembled into booklets according to, typically, a variant of a balanced incomplete block design (Gonzalez and Rutkowski 2010; Rutkowski et al. 2014). The result is, in the case of most international large-scale assessments (ILSAs), 10 or more hours of testing content delivered in two-hour booklets (Martin et al. 2016; OECD 2017). A concrete example, located in Table 1, is the 2011 Trends in International Mathematics and Science Study (TIMSS) design that distributed 429 total mathematics and science items across 14 non-overlapping mathematics blocks and 14 non-overlapping science blocks. Table 1 2011 TIMSS booklet design Only a fraction of the students in the sample take any one item, and any selected student takes only a fraction of the total available items. As a result, the actual distribution of student proficiency cannot be approximated by its empirical estimate (Mislevy et al. 1992b). Further, traditional methods of estimating individual achievement introduce an unacceptable level of uncertainty and the possibility of serious aggregate-level bias (Little and Rubin 1983; Mislevy et al. 1992a). As one means for overcoming the methodological challenges associated with multiple-matrix sampling, large-scale assessment programs adopted a population or latent regression modeling approach that uses marginal estimation techniques to generate population- and subpopulation-level achievement estimates (Mislevy 1991; Mislevy et al. 1992a, b) More specifically, using information from background questionnaires, other demographic variables of interest and responses to the cognitive portion of the test, student achievement is estimated via a latent regression model, where achievement \((\theta )\) is treated as a latent or unobserved variable for all examinees. Essentially, the limited achievement test responses, complete student background questionnaires responses, and select demographic information are used in conjunction with a measurement model-based extension of Rubin's (1987) multiple imputation approach to generate a proficiency distribution for the population (or sub-population) of interest (Beaton and Johnson 1992; Mislevy et al. 1992a, b; von Davier et al. 2006). A short, slightly more technical description follows. As in multiple imputation methods, an imputation model (called a "conditioning model") is used to derive posterior distributions of student achievement. 
This model uses all available student data (cognitive as well as background information) to generate a conditional proficiency distribution from which to draw a number of plausible values (usually five) for each student on each latent trait (e.g. mathematics, science, and associated sub-domains). Because \(\theta\) is a latent, unobserved variable for every examinee, it is reasonable to treat it as a missing value and to approximate statistics involving \(\theta\) by its expectation. That is, for any statistic, t, \(\hat{t}(\mathbf {X}, \mathbf {Y})=E[t(\theta ), \mathbf {Y}|\mathbf {X}, \mathbf {Y}]=\int t(\theta ,\mathbf {Y})p(\theta |\mathbf {X}, \mathbf {Y})d\theta\) where \(\mathbf {X}\) is a matrix of achievement item responses for all examinees and \(\mathbf {Y}\) is the matrix of responses of all examinees to the set of administered background questions. Because closed-form solutions are typically not available, random draws from the conditional distributions are drawn for each sampled examinee j (Mislevy et al. 1992b). In line with missing data practices (Rubin 1976, 1987), values for each examinee are drawn multiple times. These are typically referred to as plausible values in LSA terminology or multiple imputations in missing data literature. Using Bayes' theorem and the IRT assumption of conditional independence, $$\begin{aligned} p(\theta |\mathbf {x}_j, \mathbf {y}_j) \propto P(\mathbf {x}_j|\theta ,\mathbf {y}_j)p(\theta |\mathbf {y}_j) = P(\mathbf {x}_j|\theta )p(\theta |\mathbf {y}_j) \,, \end{aligned}$$ where \(P(\mathbf {x}_j|\theta )\) is the likelihood function for \(\theta\) and \(p(\theta |\mathbf {y}_j)\) is the distribution of \(\theta\) for a given vector of response variables. Usually, it is assumed that \(\theta\) is normally distributed according to the following model $$\begin{aligned} \theta _{j} = {\varvec{\Gamma }}^{\prime }\mathbf {y}_{j} + \epsilon _{j} \end{aligned}$$ where \(\epsilon _{j} \sim {\mathcal {N}}(0,{\varvec{\Sigma }})\) and \({\varvec{\Gamma }}\) and \({\varvec{\Sigma }}\) have to be estimated. The role of simulations in large-scale assessment research and development In the past 20 years, national and international assessments have expanded significantly in terms of the number of national- or system-level participants, platforms (computerized in addition to paper and pencil), content domains (e.g., collaborative problem solving), and the degree to which participating countries differ in economic, cultural, linguistic, geographic, and other terms. To that end, two areas of research in large-scale assessment are evaluating the performance of currently used methods given the changing nature of LSAs and developing new methods. In both cases, areas of emphasis can conceivably include test design, administration, data collection, sampling, or other relevant areas. As study administrators are naturally cautious about implementing new designs and methods without evidence of their merit and worth, a viable option for testing new methods is through simulation. Further, using empirical data to examine the performance of current and new methods is limited by the fact that we can never know the true, underlying population values of item- or person-parameters. Simulation is a low-cost powerful means for conducting methodological research in the area of large-scale assessment. Examples include Adams et al. (2013), Rutkowski and Zhou (2015). 
Traditionally, the mandate of large-scale assessments surrounds measuring and reporting achievement across populations of interest. As such, large-scale assessment developers prioritize achievement measures, in terms of framework development, psychometric quality, analysis, and reporting (OECD 2014; Martin et al. 2016). Nevertheless, background questionnaires serve to contextualize educational achievement and provide opportunities to understand correlates of learning. To that end, the background questionnaire and achievement measures have distinct frameworks, and different teams work to develop and innovate in each respective area. This (in some cases arbitrary) distinction between the achievement test and background questionnaires frequently leads researchers to regard each component separately for many methodological investigations. Therefore, lsasim (Matta et al. 2017) simulates data in a way that treats background questionnaire responses as separate from but related to the achievement test. Software for generating large-scale assessment data The goal of lsasim is to provide a set of functions that enable users to design and modify test designs that are commonly utilized in large-scale educational surveys. Such goals are similar to the goals of catR (Magis and Raiche 2012) for generating item response patterns from computer adaptive tests and mstR (Magis et al. 2017) for generating item response patterns from multi-stage tests. The difference, however, is that multi-matrix sampling designs utilized in large-scale assessments are not (yet) adaptive, and can thus, depend on other packages to estimate item parameters and achievement estimates. Although generation of item responses, given a set of item parameters and a true score, for a fixed test is not unique, none of the existing R (R Core Team 2017) IRT packages provide a means to establish multi-matrix sampling designs. Two of the most commons IRT packages used are TAM (Robitzsch et al. 2017) and mirt (Chalmers 2012). Both packages include functions to simulate item response patterns, but every "observation" will be given a generated response to every item. Users would have to delete item responses post-hoc to arrive at data that resembles a matrix sampling design. In addition to the inability to generate item responses under a multi-matrix sampling designs, none of the IRT packages reviewed provide a means for generating responses to "background questionnaires," data that are commonly used in the estimation of achievement. To include responses to background variables, one would need to develop functions on their own, or utilized an alternative package to generate mixed normal, bivariate, and ordinal data such as GenOrd (Barbiero and Ferrari 2015). With lsasim designed to generate item responses, it has no functionality to estimate item parameters or achievement estimates. For this, users should turn to existing packages, for example, TAM, as is demonstrated later in this article. The data output from lsasim is formatted to be used with TAM or mirt without any further data manipulation. Furthermore, the ibd package (Mandal 2018) can be used in tandem with lsasim to generate balanced incomplete designs. Simulation methodology Generating correlated questionnaire data Let \(X = \{X_{1}, X_{2}, \ldots , X_{p}, \ldots , X_{P}\}\) be a set of continuous random variables and \(W = \{W_{1}, W_{2}, \ldots , W_{q}, \ldots , W_{Q}\}\) be a set of ordinal (possibly dichotomous) random variables. 
For any \(W_{q}\), let there be \(1, \ldots , k_{q}, \ldots , K_{q}\) ordered response categories where \(p(W_{q} = k_{q}) = \pi _{q,k}\) such that \(\sum _{1 \le k \le K} \pi _{q,k} = 1\). Furthermore, let \(\mathbf {R}\), be a \((P+Q) \times (P+Q)\) possibly heterogeneous correlation matrix which includes (a) Pearson product-moment correlations for \(\rho (X_{p}, X_{p^{\prime }})\); (b) polychoric correlations for any \(\rho (W_{q}, W_{q^{\prime }})\); and (c) polyserial correlation for any \(\rho (X_{p}, W_{q})\). Often in the development of psychological tests, items are designed with the assumption that ordinal item responses map to a continuous latent trait. Thus, we can specify an underlying continuous variable, \(W^{\star }_{q}\), for any ordinal variable, \(W_{q}\). The relationship between \(W_{q}\) and \(W^{\star }_{q}\) is $$\begin{aligned} W_{q} = \left\{ \begin{array} {@{} l r@{} c@{} c@{} c@{} l @{}} \; 1 \quad \mathrm {if}, &{} -\infty \, &{} \,< \, &{} \, W^{\star }_{q} \, &{} \, \le \, &{} \, \alpha _{q,1} \\ \; 2 \quad \mathrm {if}, &{} \alpha _{q,1} \, &{} \,< \, &{} \, W^{\star }_{q} \, &{} \, \le \, &{} \, \alpha _{q,2} \\ \; \vdots \\ \; k \quad \mathrm {if}, &{} \alpha _{q,k-1} \, &{} \,< \, &{} \, W^{\star }_{q} \, &{} \, \le \, &{} \, \alpha _{q,k} \\ \; \vdots \\ \; K \quad \mathrm {if}, &{} \alpha _{q,K-1} \, &{} \, < \, &{} \, W^{\star }_{q} \, &{} \, \le \, &{} \, \infty \,. \end{array} \right. \end{aligned}$$ where \(\alpha _{q,k}\) is the kth threshold for \(W^{\star }_{q}\), delineating responses \(k-1\) and k on the scale of \(W^{\star }_{q}\). To simulate correlated mixed-type data, we need only a \((P+Q) \times (P+Q)\) data-generating correlation matrix, \(\mathbf {R}\) and the \(K_{q}\) marginal probabilities \(\pi _{q}\) corresponding to each ordinal variable \(W_{q}\). First, generate N replicates from \(P+Q\) independent standard normal random variables \(\mathbf {Z} = \{Z(X_{1}), \ldots , Z(X_{P}), \ldots , Z(W^{\star }_{1}), \ldots , Z(W^{\star }_{Q})\}\), such that an \(N \times (P+Q)\) data matrix, \(\mathbf {Z}\), is obtained. Second, let \(\mathbf {L}\) be the lower triangle matrix of the Cholesky factorization of \(\mathbf {R}\) where \(\mathbf {R} = \mathbf {L} \mathbf {L}^{\prime }\). We can transform \(\mathbf {Z}\) to \(\{\mathbf {X}, \mathbf {W^{\star }}\}\) using \(\mathbf {L}\) such that \(\{\mathbf {X}, \mathbf {W^{\star }} \} = \mathbf {Z} \mathbf {L}\). Finally, we transform the latent variables \(\mathbf {W^{\star }}\) to \(\mathbf {W}\) by coarsening based on Eq. 3. Generating IRT-based data In order to generate data from many models in the IRT-family, we specify an overly general model accompanied by explicit constraints, given the generating information. The general model combines the three-parameter model for dichotomous item responses and the generalized partial credit model for ordered responses, $$\begin{aligned} p(u_{ij} = k | \theta _{j}) = c_{i} + (1 - c_{i}) \frac{\text {exp} \left[ \sum _{u = 1}^{k} D a_{i} (\theta _{j} - b_{i} + d_{iu}) \right] }{\sum _{v = 1}^{K_{i}} \text {exp} \left[ \sum _{u = 1}^{v} D a_{i} (\theta _{j} - b_{i} + d_{iu}) \right] } \, \end{aligned}$$ where k is the response to item i by respondent j, \(\theta _{j}\) is the respondent's true score, and \(K_{i}\) is the maximum score on item i. 
Furthermore, \(b_{i}\) is the average difficulty for item i, \(d_{iu}\) is the threshold parameter between scores u and \(u-1\) for item i, \(a_{i}\) is the item's discrimination parameter, \(c_{i}\) is the item's pseudo-guessing parameter, and D is a scaling constant for the item. Because the partial credit model does not, generally, include a guessing parameter, we place constraints on parts of the model given the known parameters. Namely, when multiple thresholds are specified for a given item, \(u > 1\), the pseudo-guessing parameter is constrained to zero, \(c = 0\), resulting in the generalized partial credit model. Further constraining \(a = 1\) results in the partial credit model. When only one threshold is specified, Eq. 4 reduces to $$\begin{aligned} p(y_{ij} = k | \theta _{j}) = c_{i} + (1 - c_{i}) \frac{\text {exp} \left[ D a_{i} (\theta _{j} - b_{i}) \right] }{1 + \text {exp} \left[ D a_{i} (\theta _{j} - b_{i}) \right] } \,. \end{aligned}$$ From here, constraining \(c = 0\) results in the two-parameter item response model and constraining \(c= 0, a = 1\) results in the Rasch model. The lsasim package The lsasim package contains a set of functions that facilitate the generation of large-scale assessment data. The package can be divided into two interrelated sets of functions: one set for generating background questionnaire data and another set for generating the cognitive data. This section provides a description of each function within the package and demonstrates how they can be used. To start, we set a seed for replicability purposes and load the lsasim package and the polycor package (Fox 2016), both available on CRAN. Background questionnaire data The main function for generating background questionnaire data is aptly named questionnaire_gen. The function facilitates the generation of correlate continuous, binary, and ordinal data by specifying the cumulative proportions for each background item and a correlation matrix. The n_obs argument specifies the number of observations (examinees) to generate. The argument cat_prop takes a list of vectors, each of which contain the cumulative proportions for a given background item. The length of the list, that is, the number of vectors within the list, indicates the number of background items to be generated. Each vector in the list should end with 1 such that the length of each vector specifies the number of response categories for that background item. For continuous items, the vector should contain one element, 1. The code above provides an example of cumulative proportions for two background items. The first background variable has one category, indicating a continuous response. The second background item has four response categories with marginal population proportions of 0.23, 0.31, 0.27, and 0.19, respectively. The argument cor_matrix takes a possibly heterogeneous correlation matrix, consisting of Pearson correlations between numeric variables, polyserial correlations between numeric and ordinal variables, and polychoric correlations between ordinal variables. That is, all discrete variables are assumed to have an underlying continuous latent variable. In the above example, we specify a polyserial correlation of .7 between the discrete background item and the continuous item. Notice that the size of ex_cor is equal to the length of the ex_prop as the size and order of cor_matrix corresponds to cat_prop. Using ex_prop and ex_cor, we simulate one dataset with 1000 observations. 
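The article's original code chunks are not reproduced in this text, so the sketch below is a reconstruction of the example just described: one continuous item, one four-category item with marginal proportions 0.23, 0.31, 0.27, and 0.19 (cumulative 0.23, 0.54, 0.81, 1), a polyserial correlation of .7, and 1000 observations. Only argument names documented above are used, and the seed value is arbitrary.

```r
library(lsasim)
library(polycor)
set.seed(4388)  # arbitrary seed, for replicability only

# cumulative proportions for two background items:
# q1 is continuous, q2 has four ordered categories
ex_prop <- list(c(1),
                c(0.23, 0.54, 0.81, 1.00))

# heterogeneous correlation matrix (polyserial correlation of .7)
ex_cor <- matrix(c(1.0, 0.7,
                   0.7, 1.0), nrow = 2)

ex_data <- questionnaire_gen(n_obs = 1000,
                             cat_prop = ex_prop,
                             cor_matrix = ex_cor)
str(ex_data)
```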
By default, continuous variables are generated from standard normals, \({\mathcal {N}}(0,1)\). Notice the continuous variable q1 is a numeric variable and the discrete variable q2 is a factor with four levels. A third variable, subject, is the unique identifier for each observation. With our simulated data set, we see the first two moments of q1 and the marginal proportions for q2 are both well-recovered. Additionally, the polyserial correlation is also well-recovered using the hetcor function from the polycor package. It is important to note that converting the factor variables to numeric and estimating a Pearson correlation will not recover the generating correlation matrix. The questionnaire_gen function includes three optional arguments, c_mean, c_sd, and theta. The arguments c_mean and c_sd are used to scale the continuous variables where c_mean takes a vector of means for those continuous variables and c_sd takes a vector of standard deviations. The lengths of c_mean and c_sd are equal to the number of continuous items to be generated. Specification of c_mean and/or c_sd results in a continuous variables i to be distributed \({\mathcal {N}}\)(c_mean[i], c_sd[i]). Finally, theta is a logical argument where theta = TRUE results in the first continuous background item to be named "theta" in the resulting data frame. This optional argument is only for convenience when generating both background questionnaire data and cognitive data. Notice in the example above, the continuous variable is now named theta and has a mean and standard deviation close to that specified by c_mean and c_sd. When the background questionnaires are not of substantive importance, lsasim provides two functions for generating the correlation matrix and list of marginal cumulative proportions. The cor_gen function generates a random correlation matrix by specifying the number of variables via the n_var argument. The proportion_gen function generates a list of random cumulative proportions using two arguments. The first argument cat_options takes a vector whose entries specify the types of items to be generated. In the code below, cat_options = c(1, 2, 3) specifies continuous, two-category, and three-category item types to be generated. The second argument, n_cat_options is a vector of equal length to cat_options, which specifies the number of each item type to be generated. Below, n_cat_options = c(3, 2, 1) indicates that there will be three continuous items, two two-category items, and one three-category item generated (six items in all). Both the random correlation matrix, ex_cor_gen and the random marginal proportions, ex_prop_gen, can then be used by questionnaire_gen. We will generate background data for 40 examinees to be used later in this section. Cognitive data The cognitive assessments for LSAs are much more involved than the background questionnaire. As mentioned above, the cognitive assessments use an IRT measurement model administered using a multi-matrix sampling scheme. The package lsasim has been designed to provide researchers with extensive flexibility in varying these design features while providing valid default solutions as an option. There are five functions that make up the cognitive data generation, which we organize here under three categories (a) item parameter generation: item_gen; (b) test assembly and administration: block_design, booklet_design, and booklet_sample; and (c) item response generation: response_gen. 
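Again, the original chunks are omitted from this text; the following sketch reconstructs the random-generation calls described above using only documented arguments. The object name ex_background_data3 is taken from a later passage that refers to these 40 examinees, and setting theta = TRUE (so that the first continuous variable can serve as the true score for the cognitive items) is an assumption consistent with that later use.

```r
# random generating parameters for six background items:
# three continuous, two two-category, and one three-category item
ex_cor_gen  <- cor_gen(n_var = 6)
ex_prop_gen <- proportion_gen(cat_options   = c(1, 2, 3),
                              n_cat_options = c(3, 2, 1))

# background data for 40 examinees; the first continuous variable is
# named "theta" so it can be used later as the latent proficiency
ex_background_data3 <- questionnaire_gen(n_obs      = 40,
                                         cat_prop   = ex_prop_gen,
                                         cor_matrix = ex_cor_gen,
                                         theta      = TRUE)
```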
Item parameter generation Although researchers may wish to use pre-determined item parameters, the item_gen function enables the flexible generation of item parameters from item response models. The arguments n_1pl, n_2pl, and n_3pl specify how many one-, two-, and three-parameter items will be included in the item pool. For this example, we will generate five two-parameter items and ten three-parameter items. The argument thresholds specifies the number of thresholds for the one- and two-parameter. Specifying thresholds = 2 will generate two threshold parameters for each of the five two-parameter items. Finally, the arguments b_bounds, a_bounds, and c_bounds specify the bounds of the uniform distributions used to generate the b, a, and c parameters, respectively. Note that a_bounds are only applied to the two- and three-parameter items and c_bounds are applied to three-parameter items only. The above example shows the item information for the 15 items in item_pool. All 15 items have a \(b_{i}\) parameter, which is the average difficulty for the item. The five two-parameter items were specified as generalized partial credit items with two thresholds. Thus, item 1 though item 5 have two d parameters, d1 and d2 such that \(b_{i} + d_{ik}\) is the kth threshold for item i. All 15 items have a discrimination parameter, \(a_{i}\), while only item 6 through item 15 have a c parameter (pseudo-guessing). The last two variables in item_pool, k and p, are indicators to identify the number of thresholds and whether the item is from a 1PL, 2PL, or 3PL model, respectively. Test assembly and administration Large scale assessments are typically designed such that items are compiled into blocks or clusters, which are then assembled into test booklets. The goal is to develop non-overlapping blocks of items that can be assembled into test booklets. Two functions, block_design and booklet_design, facilitate the assembly of the test booklets while one function, booklet_sample facilitates the administration of the booklets to subjects. The test assembly process was split into two functions to provide users ample flexibility of how tests are constructed. The first step in the test assembly is to determine the number of blocks and the assignment of items to those blocks. The function block_design facilitates this process with two required arguments and one optional argument. The n_blocks argument specifies the number of blocks while the item_parameters argument takes a data frame of item parameters. The default allocation of items to blocks is a spiraling design. For \(1, 2, \ldots , H\) item blocks, the first item is assigned to block 1, item 2 is assigned to the block 2, and item H is assigned to block H. The process is continued such that item \(H+1\) is assigned to block 1, item \(H+2\) is assigned to block 2 and item \(H+H\) is assigned to block H until all items are assigned to a block. The default functionality of the block design is illustrated by administering the 15 items in item_pool across 4 blocks. The function block_design produces a list that contains two elements. The first element, block_assignment, is a matrix that that identifies which items from the item pool correspond to which block. The column names of block_assignment begin with b to indicate block while the rows begin with i to indicate item. 
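A sketch of the item-generation and default block-assembly calls described above follows (the article's own chunks are not shown in this text). The object names item_pool and block_ex are those used in the surrounding prose; the specific bounds supplied to b_bounds, a_bounds, and c_bounds are illustrative assumptions.

```r
# item pool: five two-parameter partial credit items (two thresholds each)
# and ten three-parameter items; the generating bounds are illustrative
item_pool <- item_gen(n_2pl      = 5,
                      n_3pl      = 10,
                      thresholds = 2,
                      b_bounds   = c(-2, 2),
                      a_bounds   = c(0.75, 1.25),
                      c_bounds   = c(0, 0.25))

# default (spiraling) assignment of the 15 items to 4 blocks
block_ex <- block_design(n_blocks = 4, item_parameters = item_pool)
block_ex$block_assignment
```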
For block b1, the first item, i1, is the first item from item_pool, the second item, i2, is the fifth item from item_pool, the third item, i3, is the ninth item from item_pool, and the fourth item, i4, is the 13th item from item_pool. Because the 15 items do not evenly distribute across 4 blocks, the fourth block only contains three items. To avoid dealing with ragged matrices, all shorter blocks are filled with zeros. The second element in block_ex is a table of descriptive statistics for each block. This table indicates the number of items in each block and the average difficulty for each block. Again, notice blocks 1 though 3 each have four items while block 4 only has three items. Furthermore, the easiest block is b4 with an average difficulty of − 0.267 while the most difficult block is b1 with an average difficulty of 0.718. Note that for partial credit items, \(b_{i}\) is used in the calculation of the average difficulty. For user-specified block construction, we can specify an indicator matrix where the number of columns equals the number of blocks and the number of rows equals the number of items in the item pool. The 1s indicate which items belong to which blocks. Given the example above, we will assign items 1 through 4 to block 1, items 5 though 8 to block 2, items 9 through 12 to block 3 and items 13 though 15 to block 4. Below, block_ex2 demonstrates how the matrix is used with the item_block_matrix argument and the items in item_pool. After the items have been assigned to blocks, the blocks are then assigned to test booklets. The booklet_design function facilitates this process with one required argument and one optional argument. The item_block_assignment argument takes the block_assignment matrix from block_design. Like block_design, booklet_design provides a default spiraling booklet assembly method which is illustrated in Table 2. Table 2 Default booklet design book_ex uses the default item-block assembly of block_ex. In the above output, book B1 contains the items from block 1 and block 2, booklet B2 contains the items from block 2 and block 3, booklet B3 contains the items from block 3 and block 4, and book B4 contains the items from block 1 and block 4. Notice that the first two test booklets contain eight items while the last two books contain seven items. This is because block 4 only has three items whereas blocks 1, 2, and 3 have four times. Like block_design, booklet_design avoids ragged matrices by filling shorter booklets with zeros. Users can also explicitly specify the booklet assembly design using the optional argument book_design. By specifying a booklet design matrix, users can control which item blocks go to which booklets. The book_design argument takes an indicator matrix where the columns indicate the item blocks and the rows indicate the booklets. In the code below, we create a matrix, block_book_design, that will create six test booklets. Booklet 1 (row 1) will include items from blocks 1, 3 and 4 while booklet 6 (row 6) will include items from block 1 only. Notice how booklets B1 and B5, contain 11 items each while booklets B2 and B6 contain four items each. Booklet administration The final component of the test assembly and administration is the administration of test booklets to examinees. The function booklet_sample facilitates the distribution of test booklets to examinees. The two required arguments are n_subj, the number of examinees, and book_item_design, the output from booklet_design. 
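Before turning to the sampling options, the sketch below reconstructs the default booklet assembly and its administration, the steps discussed just above and picked up again below. The object names book_ex and book_admin are the ones the text refers to; accessing the block assignment as block_ex$block_assignment assumes the list structure described earlier.

```r
# default (spiraling) assembly of the four item blocks into test booklets
book_ex <- booklet_design(item_block_assignment = block_ex$block_assignment)

# administer the booklets to 40 examinees, one booklet per examinee
book_admin <- booklet_sample(n_subj = 40, book_item_design = book_ex)
head(book_admin)
```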
The default sampling scheme makes all booklets equally likely, but the optional argument book_prob takes a vector of probabilities to make some booklets more or less likely to be administered. The logical argument resample will resample booklets until the difference in the number of booklets sampled is less than e or until iter attempts have been reached. The resampling functionality may be useful when n_subj is small and only one dataset is being generated. Using book_ex from above, we administer the four booklets to 40 examinees. The result is a data frame with three columns: subject, book, and item. The data frame is organized in a long (univariate) format where there is one row for each subject-item combination. The long format is required for generating item responses with the response_gen function. As can be seen in the output above, subject 1 has been administered booklet 2 while subject 2 has been administered booklet 4. Item response generation Recall that the item_gen function enables the user to specify different combinations of item response models for a given item pool. The response_gen function will generate item responses for all models given four required arguments and five optional arguments. Both the subject and item arguments take length-N vectors that provide the subject-by-item information, where \(N = \sum_{j=1}^{J} n_{j}\) and \(n_{j}\) is the number of items in the test for examinee j. The argument theta takes a J-length vector of true latent proficiency values for the examinees, where J is the total number of examinees. Finally, because the simplest model is the one-parameter model, only the b_par argument is required for the items. The b_par argument takes an I-length vector of item difficulties, where I is the total number of items in the item pool. The optional arguments a_par and c_par also take I-length vectors for the corresponding item parameters, while d_par takes an I-length list where each element in the list is a \((K_{i} - 1)\)-length vector containing the thresholds for each item. The argument item_no is used only when a subset of items is used from a given item pool. Finally, the argument ogive allows the user to omit the scaling constant for a logistic ogive (default) or to include a normal ogive, ogive = "Normal". Using the generated background data in ex_background_data3, the generated item parameters in item_pool, and the booklet sampling information in book_admin, we generate item responses. The result of response_gen is a wide (multivariate) data frame where there is one row per examinee and each item is a column, with the final variable indicating the subject ID. Because not every examinee sees every item, items not administered are considered missing. Combining cognitive and background questionnaire data At this point, we've generated correlated background data that includes a continuous true score (e.g., \(\theta\)) and cognitive data. The lsasim package is designed such that both generated datasets have an equal number of rows for easy merging using the subject variable in each data frame. The resulting dataset, ex_full_data, is the generated large-scale assessment data. Note that this section provided a general overview of the lsasim package. Interested readers can turn to the lsasim wiki for further documentation and testing results. In particular, one can find vignettes that further illustrate item parameter generation, test assembly, and test administration.
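A sketch of the administration, response-generation, and merging steps described in this overview follows, continuing with the objects from the running example. The item_pool column names (b, a, c, d1, d2), the theta column assumed in ex_background_data3, and the construction of d_par are illustrative guesses rather than guaranteed package conventions.

# administer the four booklets in book_ex to 40 examinees
book_admin <- booklet_sample(n_subj = 40, book_item_design = book_ex)

# I-length list of threshold vectors; only the five partial credit items actually
# have d1 and d2 values (the handling of the remaining items is assumed here)
d_list <- lapply(seq_len(nrow(item_pool)),
                 function(i) unlist(item_pool[i, c("d1", "d2")]))

# generate item responses in wide format
cog_data <- response_gen(subject = book_admin$subject,
                         item    = book_admin$item,
                         theta   = ex_background_data3$theta,
                         b_par   = item_pool$b,
                         a_par   = item_pool$a,
                         c_par   = item_pool$c,
                         d_par   = d_list)

# merge background and cognitive data on the subject variable
ex_full_data <- merge(ex_background_data3, cog_data, by = "subject")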
Generating data from PISA 2012 In this section, we demonstrate how existing background information, item parameters, and booklet designs can be used to generate data. The lsasim package includes prepared data from the 2012 administration of Programme for International Student Assessment. PISA 2012 background questionnaire The lsasim package includes the cumulative probabilities and heterogeneous correlation matrix for 18 background questionnaire items and a single mathematics plausible value. The 18 items comprise three scales, perseverance, openness to problem solving, and attitudes toward school. The cumulative proportions and correlation matrix were estimated using Switzerland student questionnaire data available from the PISA 2012 data base. It is important to note, however, that this background information is included for demonstration purposes only and is not suitable for valid inferences. pisa2012_q_marginal is a list of cumulative proportions for all 19 variables. There are 10 five-category items, eight four-category items, and 1 continuous variable. Notice that each vector is given a name that corresponds to the item name in the PISA 2012 student codebook (also available from the PISA 2012 data base). The \(19 \times 19\) heterogeneous correlation matrix is pisa2012_q_cormat. Printing the sub-matrix for the first five items reveals that the matrix also includes the item names. With these two pieces of information, we can generate background questionnaire responses and mathematics scores for 1000 test takers. Notice that the variable names in pisa_background take the name of the PISA items. When a named list is used with the questionnaire_gen function, the resulting variables in the data frame will adopt those names. PISA 2012 mathematics assessment The measurement model used in the 2012 administration of PISA was a partial credit model (OECD 2014). The technical manual provides the item parameters for all 109 mathematics items (OECD 2014, pp. 406–409), which are stored in pisa2012_math_item. Notice that each item has an item name, an item number, a b parameter, and, for those partial credit items, two d parameters. The PISA technical manual also provides information regarding the allocation of the 109 items to ten item blocks (OECD 2014, pp. 406–409). pisa2012_math_block is the PISA 2012 block design matrix suitable for the block_design function. The design is stored as a data frame object, as it also includes the item names for consistency. Because only the indicator component of pisa2012_math_block is needed, we subset it and coerce it to a matrix. The PISA 2012 item parameters and corresponding block design matrix are used to create the block assignment matrix. Printing the block_descriptives provides a quick check of the block lengths and average difficulties. The booklet design information for the PISA mathematics assessment was also translated to a book design matrix, pisa2012_math_booklet (OECD 2014, p. 31). Like the block design, the book design is also stored as a data frame object to include a variable for booklet IDs. Note that this matrix was designed to construct the 13 standard booklets and does not include the easy booklets. Each row indicates a book while each column indicates an item block. Again, we subset the data frame and coerce it to a matrix. From here, we can use it with block_assignment from pisa_blocks to create the 13 booklets. The 13 standard books can now be administered to the 1000 test takers. 
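The PISA 2012 workflow described so far, from the background questionnaire through booklet administration, can be sketched as follows. The questionnaire_gen argument names and the assumption that the item-name or booklet-ID column is the first column of the stored design data frames are illustrative guesses; the package documentation gives the exact interface.

# background questionnaire responses and mathematics scores for 1000 test takers
pisa_background <- questionnaire_gen(n_obs      = 1000,
                                     cat_prop   = pisa2012_q_marginal,
                                     cor_matrix = pisa2012_q_cormat)

# block assignment: keep only the indicator part of the stored block design
block_mat   <- as.matrix(pisa2012_math_block[, -1])   # drop the item-name column (assumed position)
pisa_blocks <- block_design(item_parameters   = pisa2012_math_item,
                            item_block_matrix = block_mat)
pisa_blocks$block_descriptives                         # block lengths and average difficulties

# assemble the 13 standard booklets and administer them
book_mat   <- as.matrix(pisa2012_math_booklet[, -1])   # drop the booklet-ID column (assumed position)
pisa_books <- booklet_design(item_block_assignment = pisa_blocks$block_assignment,
                             book_design           = book_mat)
pisa_admin <- booklet_sample(n_subj = 1000, book_item_design = pisa_books)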
Because we have excluded the easy booklets, not all 109 items will be used to generate responses. Because the response_gen function matches item information based on a unique item number, we must subset the item bank to exclude those items in pisa2012_math_item that were not administered in the standard test books. To accomplish this, first obtain a sorted vector of unique items administered to the test takers, subitems. This vector can be used to subset the item bank, pisa2012_math_item. The resulting object, pisa_items, contains item information for the 84 administered items. Because we are using a subset of items whose item numbers are not sequential, we use the optional argument item_no when using the response_gen function. The variables of the resulting data frame are renamed to the PISA 2012 items names. The final procedure to obtain the generated PISA 2012 data is to merge the background data with the mathematics data. The final dataset, pisa_data contains 104 columns and 1000 rows. Printing the first 10 observations for select background questionnaire items and mathematics assessment items provides a feel for the resulting data. Test design simulation study We now conduct a simple simulation to demonstrate that the default test generating functions operate as intended. One hundred one-parameter items were generated using the item_gen function. Those items were distributed across five item blocks, which were assembled into 5 booklets. Each booklet contained 40 items. Both the item block assembly and booklet assembly use the default spiraling design described above. A true score, \(\theta\), was generated for 10,000 test takers in two countries (5000 test takers in each country). For the first country, \(\theta \sim {\mathcal {N}}(0, 1)\) while for the second country, \(\theta \sim {\mathcal {N}}(0.25, 1)\). The five booklets were distributed evenly across the two countries and country-specific means and variances were estimated using TAM (Robitzsch et al. 2017). The model is such that the intercept for country 1 is constrained to zero so that we are estimating the group difference. As seen in Table 3, the country means and variances were recovered. Table 3 Simulation results, means and standard deviations of country-specific parameters based on 100 replications ILSAs are tasked with obtaining sound measures of what students from around the world know and can do in the relevant content areas, as well as obtaining a host of background variables to aid in the contextualization of those measures. These tasks place a unique set of requirements on the test design and psychometric modeling. Although innovations in ILSAs can come from the re-analysis of past assessments, those data are fundamentally constricted to a particular design and by extant data. Due to the scope of ILSAs, pilot testing is highly restricted, leaving simulation studies as the primary means for understanding issues and possible solutions within the ILSA arena. The intention for lsasim was to develop a minimal number of functions to facilitate the generation of data that mimics the large scale assessment context to the extent possible. To that end, lsasim offers the possibility of simulating data that adhere to general properties found in the background questionnaire (mostly ordinal, correlated variables that are also related to varying degrees with some latent trait). The package also features functions for simulating achievement data according to a number of common IRT models with known parameters. 
A clear advantage of lsasim over other simulation software is that the achievement data, in the form of item responses, can arise from multiple-matrix sampled test designs. Although the background questionnaire data can be linked to the test responses, all aspects of lsasim can function independently, affording researchers a high degree of flexibility in terms of possible research questions and the part of an assessment that is of most interest. Built-in default functionality also allows researchers to opt for randomly chosen population parameters. Alternatively, users can specify their own test designs and population parameters, offering the possibility of full control over the research design and data generation process. By way of introduction, the paper described and briefly illustrated the eight functions that make up the package. Because researchers will in many circumstances use information from previous assessments for simulation purposes, the paper went on to demonstrate how LSA data can be generated from parameter estimates and design features of PISA 2012. Finally, a small simulation showed that using the default test assembly functions recovered known population proficiency parameters for two groups. Although we demonstrated the soundness of the package's default settings, we expect users to rely on those default settings only for aspects of a given simulation design that are considered to be nuisances. For example, a study designed to examine the efficiency of various item block designs could rely on the default background questionnaire functions without loss of generality. Alternatively, a researcher interested in studying background questionnaire invariance across heterogeneous populations might utilize many of the default test assembly functions. Otherwise, we generally assume that users will bring a set of known or plausible population parameters that will provide the basis for further investigations. We believe that lsasim can be a useful tool for operational test developers and for basic and applied measurement researchers. As national and international assessments branch into new platforms and populations, it is important that researchers with a solid background in measurement and large-scale test design have a ready means for evaluating the performance of new and existing methods. Finally, as a reminder, the default PISA parameters that are included with lsasim are not intended to be used to infer anything about the 2012 PISA administration. Rather, they provide an illustrative example of a test design and associated parameters that approximates an operational setting. Adams, R. J., Lietz, P., & Berezner, A. (2013). On the use of rotated context questionnaires in conjunction with multilevel item response models. Large-scale Assessments in Education, 1(1), 5. American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). Standards for educational and psychological testing 2014. AERA. Barbiero, A., & Ferrari, P. A. (2015). GenOrd: Simulation of discrete random variables with given correlation matrix and marginal distributions [Computer software manual]. Retrieved from https://CRAN.R-project.org/package=GenOrd (R package version 1.4.0) Beaton, A. E., & Johnson, E. G. (1992). Overview of the scaling methodology used in the national assessment. Journal of Educational Measurement, 29(2), 163–175. Chalmers, R. P. (2012).
mirt: A multidimensional item response theory package for the R environment. Journal of Statistical Software, 48(6), 1–29. Retrieved from http://www.jstatsoft.org/v48/i06/ Foshay, A. W., Thorndike, R., Hotyat, F., Pidgeon, D., & Walker, D. (1962). Educational achievement of thirteen-year-olds in twelve countries (Tech. Rep.). Hamburg: UNESCO Institute for Education. Retrieved from http://unesdoc.unesco.org/images/0013/001314/131437eo.pdf Fox, J. (2016). polycor: Polychoric and polyserial correlations [Computer software manual]. Retrieved from https://CRAN.R-project.org/package=polycor (R package version 0.7-9) Gonzalez, E., & Rutkowski, L. (2010). Principles of matrix booklet designs and parameter recovery in large-scale assessments. IERI Monograph Series, 3, 125–156. Little, R. J. A., & Rubin, D. B. (1983). On jointly estimating parameters and missing data by maximizing the complete-data likelihood. The American Statistician, 37(3), 218–220. Lockheed, M., Prokic-Breuer, T., & Shadrova, A. (2015). The experience of middle-income countries participating in PISA 2000–2015. Washington, DC: World Bank Publications. https://doi.org/10.1787/9789264246195-en. Magis, D., & Raiche, G. (2012). Random generation of response patterns under computerized adaptive testing with the R package catR. Journal of Statistical Software, 48(8), 1–31. https://doi.org/10.18637/jss.v048.i08. Magis, D., Yan, D., & von Davier, A. (2017). mstR: Procedures to generate patterns under multistage testing [Computer software manual]. Retrieved from https://CRAN.R-project.org/package=mstR (R package version 1.0) Mandal, B. N. (2018). ibd: Incomplete block designs [Computer software manual]. Retrieved from https://CRAN.R-project.org/package=ibd (R package version 1.4) Martin, M. O., Mullis, I. V. S., & Hooper, M. (Eds.). (2016). Methods and procedures in TIMSS 2015. Boston: TIMSS & PIRLS International Study Center, Boston College. Retrieved from http://timssandpirls.bc.edu/publications/timss/2015-methods.html Matta, T., Rutkowski, L., Rutkowski, D., & Liaw, Y. (2017). lsasim: Simulate large scale assessment data [Computer software manual]. Retrieved from https://CRAN.R-project.org/package=lsasim (R package version 1.0.0) Mislevy, R. J. (1991). Randomization-based inference about latent variables from complex samples. Psychometrika, 56(2), 177–196. Mislevy, R. J., Beaton, A. E., Kaplan, B., & Sheehan, K. M. (1992a). Estimating population characteristics from sparse matrix samples of item responses. Journal of Educational Measurement, 29(2), 133–161. Mislevy, R. J., Johnson, E. G., & Muraki, E. (1992b). Scaling procedures in NAEP. Journal of Educational and Behavioral Statistics, 17(2), 131–154. OECD. (2014). PISA 2012 technical report (Tech. Rep.). Paris: OECD Publishing. Retrieved from https://www.oecd.org/pisa/pisaproducts/PISA-2012-technical-report-final.pdf OECD. (2017). PISA 2015 technical report (draft). Paris: OECD Publishing. R Core Team. (2017). R: A language and environment for statistical computing [Computer software manual]. Vienna: R Core Team. Retrieved from https://www.R-project.org/ Robitzsch, A., Kiefer, T., & Wu, M. (2017). TAM: Test analysis modules [Computer software manual]. Retrieved from https://CRAN.R-project.org/package=TAM (R package version 2.0-37) Rubin, D. B. (1976). Inference and missing data. Biometrika, 63(3), 581–592. Rubin, D. B. (1987). Multiple imputation for nonresponse in surveys. Hoboken, NJ: Wiley. Rutkowski, L., von Davier, M., Gonzalez, E., & Zhou, Y. (2014). 
Assessment design for international large-scale assessment. In L. Rutkowski, M. von Davier, & D. Rutkowski (Eds.), Handbook of international large-scale assessment: Background, technical issues, and methods of data analysis. Boca Raton: Chapman & Hall/CRC Press. Rutkowski, L., & Zhou, Y. (2015). The impact of missing and error-prone auxiliary information on sparse-matrix sub-population parameter estimates. Methodology, 11(3), 89–99. Shoemaker, D. M. (1973). Principles and procedures of multiple matrix sampling. Oxford: Ballinger. von Davier, M., Sinharay, S., Oranje, A., & Beaton, A. (2006). The statistical procedures used in national assessment of educational progress: Recent developments and future directions. In C. R. Rao & S. Sinharay (Eds.), Handbook of statistics (pp. 1039–1055). Amsterdam: Elsevier. The concept for lsasim was fostered by LR and DR. THM carried out the development of lsasim. YL led the testing of lsasim. THM, LR and DR contributed to the writing of the manuscript. All authors read and approved the final manuscript. The authors acknowledge Dr. Eugenio Gonzalez for his feedback during development as well as the Norwegian Research Council for supporting this research. None of the authors have any competing interests that would be interpreted as influencing the research and ethical standards were followed in the conduct of lsasim and the writing of this manuscript. The package lsasim, including data used in this manuscript, can be found on the Comprehensive R Archive Network, CRAN. This manuscript was partially funded by the Norwegian Research Council, FINNUT program, Grant 255246. Pearson, San Antonio, TX, USA: Tyler H. Matta. Centre for Educational Measurement, University of Oslo, Oslo, Norway: Leslie Rutkowski, David Rutkowski & Yuan-Ling Liaw. Indiana University, Bloomington, IN, USA: Leslie Rutkowski & David Rutkowski. Correspondence to Tyler H. Matta. Matta, T.H., Rutkowski, L., Rutkowski, D. et al. lsasim: an R package for simulating large-scale assessment data. Large-scale Assess Educ 6, 15 (2018) doi:10.1186/s40536-018-0068-8
Methods for Multidimensional Scaling Part 2: General Criterion Function Example Algorithms & Functions By Roxy Cramer In this article, we look at a generalized criterion function example, including a closer look at data transformation, distance models, strata weights, parameters, and missing values via excerpts from the technical paper, "Methods for Multidimensional Scaling" by Douglas B. Clarkson, IMSL, Inc., 1987. General Criterion Function Example A generalized criterion function is given as $$ q = \sum_{h} \omega_h \sum_{i,j} |f(\tilde \delta_{ijm}) - a_h - b_h f(\delta_{ijm})|^p $$ \(\omega_h\) are "weights" which depend upon the \( f(\tilde \delta_{ijm}) \) in the \(h^{th}\) stratum; \(h\) indexes the strata; \(f\) is one of several transformations discussed in the Data transformation section below; \(a_h\) and \(b_h\) are parameters used in stratum \(h\) and discussed below; \(m\) indexes the dissimilarity matrix and depends upon \(h\) according to the stratification used; \(p\) allows for general LP (\(p^{th}\) power) estimation. The most likely values for \(p\) are 2.0 for least squares and 1.0 for least absolute value. Other parameters in the model are found in the distance models for \(\delta_{ijm}\) and for nonmetric scaling in the disparities \(\tilde \delta_{ijm}\). Null and Sarle (1982) have given a criterion function which uses \(p^{th}\) power estimation for ratio and interval data. The criterion \(q\) generalizes \(p^{th}\) power estimates to the nonmetric case. Criterion Function Data Transformation The function \(f\) transforms the disparities \(\tilde \delta\) and the distances \(\delta\), allowing the user to change assumptions about the distribution of the observed dissimilarities. This ability is most important in metric data, but \( f \) also has effects in nonmetric data, primarily through the scaling weights \(\omega_h\). Since least squares and normal theory maximum likelihood estimates are equivalent when there is a single stratum, for metric data the function \( f \) may be thought of as a transformation to normality. Alternatively, \(f\) may be used to stabilize the within-strata variances. Three commonly used choices for \(f \) are \(f(x) = x, f(x) = x^2,\) or \(f(x) = \ln x\), where \(x > 0\). For metric data, if squared distances have constant variance within each stratum, then \(f(x) = x^2\) should be used, with other transformations used in a similar manner as appropriate. The ALSCAL models (Takane, Young, and DeLeeuw 1977) use \(f(x) = x^2\) in the stress function, while \(f(x) = x\) is used in the MULTISCALE models (Ramsey 1983), and KYST (Kruskal, Young, and Seery 1973), among other models. The MULTISCALE models also allow the use of \(f(x) = \ln x\). The Distance Models These distance models are equivalent to those in ALSCAL (Takane, Young, and DeLeeuw 1977).
• Euclidean model $$ \delta^2_{ijm} = \sum_{k=1}^{\tau} (X_{ik} - X_{jk})^2 $$ where \(i\) and \(j\) index the stimuli, and \(m\) indexes the matrix. • Individual-differences model $$ \delta^2_{ijm} = \sum_{k=1}^{\tau} W_{mk} (X_{ik} - X_{jk})^2 $$ where \(W_{mk}\) is the weight on the \(k^{th}\) dimension for the \(m^{th}\) matrix. • Stimulus weighted model $$ \delta^2_{ijm} = \sum_{k=1}^{\tau} S_{ik} (X_{ik} - X_{jk})^2 $$ where \(S_{ik}\) is the weight on the \(k^{th}\) dimension for the \(i^{th}\) stimulus. • Stimulus-weighted individual-differences model $$ \delta^2_{ijm} = \sum_{k=1}^{\tau} S_{ik}W_{mk} (X_{ik} - X_{jk})^2 $$ Of course, other distance models exist. For example, IDIOSCALE (Carroll and Chang 1970) allows a rotation of each individual's coordinate axis, while Weeks and Bentler (1982) propose some asymmetric models via skew-symmetric matrices. Scaling a Criterion Function With Strata Weights In metric data, strata weights are used to scale the criterion function within a stratum. Weights which are inversely proportional to the variances are preferred because they lead to normal distribution theory maximum-likelihood estimates. Weights inversely proportional to the variances (when \(p = 2\)) are given as $$ \omega^{-1}_{h} = \frac{ \sum |f(\tilde{\delta}_{ijm}) - a_h - b_hf(\hat{\delta}_{ijm})|^p}{n_h} $$ where \(\omega_h\) is the weight in the \(h^{th}\) stratum, the sum is over all observations in the stratum, and \(n_h\) is the number of observations in the stratum. When \(p \ne 2\), the resulting weights may still be optimal in some sense. The astute reader will note that with the choice for \(\omega_h\) above, the criterion function is the ratio of two proportional quantities and as such is the sum over \(h\) of the proportionality constant, \(n_h\). Since the estimation algorithm treats \(\omega_h\) as fixed during each iteration, the criterion function is still "optimized" during each iteration (in the sense that its first partial derivatives will converge to zero). The partial derivatives of \(q\) for fixed \(\omega_h\) are identical to those obtained from criterion $$ \tilde{q} = \sum_{h} n_h \ln \left [ \sum_i \sum_j |f(\tilde{\delta}_{ijm}) - a_h - b_h f(\hat{\delta}_{ijm})|^p \right ]$$ When weights inversely proportional to the strata variances are used, \(\tilde{q}\) is actually optimized. However, because of the equivalence in the derivatives, the result is the same as if one were to optimize \(q\) for fixed weights, where the fixed weights depend upon the final parameter estimates. In nonmetric scaling the criterion function is minimized with respect to both \(\delta\) and \(\tilde{\delta}\), and the weights are used as a scaling factor so that the solution will not degenerate to zero. In most criterion-multidimensional scaling models, scaling is provided by the use of one of two possible strata weights proposed by Kruskal (1964). These weights are given as $$ \omega_h^{-1} = \frac{\sum |f(\tilde{\delta}_{ijm})|^p}{n_h}$$ $$ \omega_h^{-1} = \frac{\sum |f(\tilde{\delta}_{ijm}) - \bar{f}(\tilde{\delta}_{...})|^p}{n_h}$$ where the sum is over the observations in the stratum, and where \(\bar{f}(\tilde{\delta}_{...})\) denotes the average of the disparities in the stratum. (Kruskal used \(p = 2\).) Both weighting schemes are allowed in \(q\) for metric as well as nonmetric data. Criterion Function Parameters \(a_h\) and \(b_h\) The meaning of parameters \(a_h\) and \(b_h\) varies depending on other aspects of the model.
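Before turning to the parameters \(a_h\) and \(b_h\) in detail, the pieces introduced so far (the transformation \(f\), a distance model, and the stratum weight \(\omega_h\)) can be tied together in a small numeric sketch. The code below is plain base R with toy values chosen purely for illustration; it is not taken from any IMSL routine, and it uses the individual-differences distance model with \(f(x) = x\), \(a_h = 0\), \(b_h = 1\), and \(p = 2\).

set.seed(1)

# toy configuration: 4 stimuli in 2 dimensions, plus one subject's dimension weights
X <- matrix(c(-1, 0, 1, 0, 0, 1, 0, -1), ncol = 2, byrow = TRUE)
w <- c(0.5, 1.5)

# individual-differences (weighted Euclidean) distances delta_ijm for this subject
idw_dist <- function(X, w) {
  n <- nrow(X)
  d <- matrix(0, n, n)
  for (i in 1:n) for (j in 1:n)
    d[i, j] <- sqrt(sum(w * (X[i, ] - X[j, ])^2))
  d
}
delta <- idw_dist(X, w)

# noisy "observed" dissimilarities standing in for the disparities delta-tilde
delta_tilde <- delta + matrix(rnorm(length(delta), sd = 0.1), nrow(delta))

# contribution of one stratum to q, with the variance-type weight
# omega_h^{-1} = sum(...)/n_h described above
res   <- abs(delta_tilde - delta)^2
omega <- length(res) / sum(res)
q_h   <- omega * sum(res)   # equals n_h by construction, as the text notes
q_h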
Since both \(a_h\) and \(b_h\) are redundant in nonmetric data, \(a_h\) is fixed at \(0\), while \(b_h\) is fixed at \(1\) in this case. For metric data the meaning and use of both \(a_h\) and \(b_h\) vary with the transformation \(f\) and with the data type as follows: If transformation \(f (x) = x\) is used, then the parameter \(a_h\) is a translation parameter used for interval data. In this case, the quantity \(\tilde{\delta}_{ijm} - a_h\) is assumed to be a distance (i.e., when \(\tilde{\delta}_{ijm} - a_h\) is zero, the objects are assumed to be in the same location, while other values of \(\tilde{\delta}_{ijm} - a_h\) are all positive). When the observed data are ratio, \(a_h\) is set to zero. For \(f(x) = x, b_h\) is a scaling factor allowing for different scalings of distance between strata. For example, some subjects may consistently give a smaller response than others. Whether \(b_h\) should be used when different strata scales are allowed depends upon the parameterization used for \( \delta \), since \(b_h\) may be redundant for some models. If \(f(x) = x^2\), then the model assumes in interval data that \(\tilde{\delta}^2 - a_h\) is a squared distance measure. Thus, squared distance plus a constant is assumed to be observed. As when \(f(x) = x, a_h\) should be set to \(0\) for ratio data. Once more, when \(f(x) = x^2\), the parameter \(b_h\) is a scaling parameter. When \(f(x) = \ln(x)\) the general meaning of \(a_h\) and \(b_h\) changes. For such data, \(\exp(a_h)\) is a scaling factor, while \(b_h\) is a power transformation parameter (the transformation can be rewritten as \(\ln(\delta^{b_h})\) . Of course, \(b_h\) should be fixed at \(1\) if a power transformation is not desired. Since no translation to zero distance is possible when \(f (x) = \ln(x)\), interval data are not handled in this case. Unique Parameter Estimates All models given are overparameterized so that the resulting parameter estimates are not uniquely defined. As was discussed for the Euclidean model, the columns of \(X\) can be translated. Moreover, in the Euclidean model, rotation is also possible. To eliminate lack of uniqueness due to translation, model estimates for the configuration can be centered in all models. No attempt at eliminating the rotation problem is usually made, but note that rotation invariance is not a problem in some of the models given. With more general models than the Euclidean model, other kinds of overparameterization occur. Further restrictions on the parameters to eliminate this overparameterization are given below by model transformation (\(f\)) type. In the following, \(W_{ik} \in W\) and \(S_{ik} \in S\), where \(W\) is the matrix of subject weights, and \(S\) is a matrix of stimulus weights. The restrictions to be applied by model transformation type are outlined below. 1. For all models (a) \(\sum_{i=1}^n x_{ik} = 0\); i.e., center the columns of \(X\). (b) If \(W\) is in the model, scale the columns of \(W\) so that \(\sum_{i=1}^n x^2_{ik} = 1\). (c) If \(S\) is in the model and \(W\) is not in the model, scale the columns of \(S\) so that \(\sum_{i=1}^n x^2_{ik} = 1\). (d) If both \(S\) and \(W\) are in the model, scale the columns of \(W\) so that \(\sum_{i=1}^n s^2_{ik} = 1\). 2. For \(f(x) = x\) and \(f(x) = x^2\) (a) Set \(b_h = 1\) if the data are matrix conditional and \(W\) is in the model, or if the data are unconditional. (In all cases, matrix conditional with only one matrix is considered the same as unconditional data.) 
(b) If the data are matrix conditional and \(W\) is not in the model, scale all elements in \(S\) (or \(X\) if \(S\) is not in the model) so that \(\sum_{h=1}^\gamma b^2_{h} = \gamma\), where \(\gamma\) is the number of matrices observed. (c) If the data are row conditional and \(W\) is in the model, then scale the rows of \(W\) so that \(\sum_{i=1}^n b^2_{h} = n\), where \(h\) corresponds to \(i\) for the appropriate matrix. (d) If the data are row conditional and \(W\) is not in the model, but \(S\) is in the model, then scale the rows of \(S\) so that \(\sum_{m=1}^n b^2_{h} = \gamma\), where \(h\) corresponds to \(m\) for the appropriate stimulus. (e) If the data are row conditional and neither \(W\) or \(S\) is in the model, then scale all elements in \(X\) so that \(\sum_{h} b^2_{h} = n\gamma\). 3. For \(f(x) = \ln(x)\) Substitute \(a_h\) for \(b_h\) (but set \(a_h\) to \(0\) instead of \(1\)) in all restrictions in item 2. Missing Values in Criterion Functions Missing data are usually handled by eliminating the missing data point from the stress function. If the amount of missing data is not too severe, then it is not likely that problems (e.g., overparameterization) will result. In computing initial estimates, the average of the data elements in the stratum are often substituted for the missing data, and the usual estimation procedure can then be used. If the data are row conditional and all elements in a stratum are missing, the average of the non-missing data values for the matrix is often used. In this article we looked at a generalized criterion function example, with a closer look at data transformation, distance models, strata weights, parameters, and missing values used therein. If you want to continue reading about the computational procedures for optimization and parameter estimation for generalized criterion functions, part three covers it in depth. If you want to get an overview of multidimensional scaling, we looked at the different methods used for multidimensional scaling in part one. Roxy Cramer, PhD statistician, Perforce. Roxy Cramer is a PhD statistician for the IMSL Numerical Libraries at Perforce. Roxy specializes in applying and developing software for data science applications. Her current research interests include statistical methods and machine learning algorithms for distributed data.
In statistics, regression analysis is a statistical process for estimating the relationships among variables. It includes many techniques for modeling and analyzing several variables, when the focus is on the relationship between a dependent variable and one or more independent variables. More specifically, regression analysis helps one understand how the typical value of the dependent variable (or 'criterion variable') changes when any one of the independent variables is varied, while the other independent variables are held fixed. Most commonly, regression analysis estimates the conditional expectation of the dependent variable given the independent variables – that is, the average value of the dependent variable when the independent variables are fixed. Less commonly, the focus is on a quantile, or other location parameter of the conditional distribution of the dependent variable given the independent variables. In all cases, the estimation target is a function of the independent variables called the regression function. In regression analysis, it is also of interest to characterize the variation of the dependent variable around the regression function which can be described by a probability distribution. Regression analysis is widely used for prediction and forecasting, where its use has substantial overlap with the field of machine learning. Regression analysis is also used to understand which among the independent variables are related to the dependent variable, and to explore the forms of these relationships. In restricted circumstances, regression analysis can be used to infer causal relationships between the independent and dependent variables. However this can lead to illusions or false relationships, so caution is advisable;[1] for example, correlation does not imply causation. Many techniques for carrying out regression analysis have been developed. Familiar methods such as linear regression and ordinary least squares regression are parametric, in that the regression function is defined in terms of a finite number of unknown parameters that are estimated from the data. Nonparametric regression refers to techniques that allow the regression function to lie in a specified set of functions, which may be infinite-dimensional. The performance of regression analysis methods in practice depends on the form of the data generating process, and how it relates to the regression approach being used. Since the true form of the data-generating process is generally not known, regression analysis often depends to some extent on making assumptions about this process. These assumptions are sometimes testable if a sufficient quantity of data is available. Regression models for prediction are often useful even when the assumptions are moderately violated, although they may not perform optimally.
However, in many applications, especially with small effects or questions of causality based on observational data, regression methods can give misleading results.[2][3] 2 Regression models 2.1 Necessary number of independent measurements 2.2 Statistical assumptions 3 Underlying assumptions 4 Linear regression 4.1 General linear model 4.2 Diagnostics 4.3 "Limited dependent" variables 5 Interpolation and extrapolation 6 Nonlinear regression 7 Power and sample size calculations 8 Other methods The earliest form of regression was the method of least squares, which was published by Legendre in 1805,[4] and by Gauss in 1809.[5] Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the Sun (mostly comets, but also later the then newly discovered minor planets). Gauss published a further development of the theory of least squares in 1821,[6] including a version of the Gauss–Markov theorem. The term "regression" was coined by Francis Galton in the nineteenth century to describe a biological phenomenon. The phenomenon was that the heights of descendants of tall ancestors tend to regress down towards a normal average (a phenomenon also known as regression toward the mean).[7][8] For Galton, regression had only this biological meaning,[9][10] but his work was later extended by Udny Yule and Karl Pearson to a more general statistical context.[11][12] In the work of Yule and Pearson, the joint distribution of the response and explanatory variables is assumed to be Gaussian. This assumption was weakened by R.A. Fisher in his works of 1922 and 1925.[13][14][15] Fisher assumed that the conditional distribution of the response variable is Gaussian, but the joint distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821. In the 1950s and 1960s, economists used electromechanical desk calculators to calculate regressions. Before 1970, it sometimes took up to 24 hours to receive the result from one regression.[16] Regression methods continue to be an area of active research. In recent decades, new methods have been developed for robust regression, regression involving correlated responses such as time series and growth curves, regression in which the predictor or response variables are curves, images, graphs, or other complex data objects, regression methods accommodating various types of missing data, nonparametric regression, Bayesian methods for regression, regression in which the predictor variables are measured with error, regression with more predictor variables than observations, and causal inference with regression. Regression models Regression models involve the following variables: The unknown parameters, denoted as β, which may represent a scalar or a vector. The independent variables, X. The dependent variable, Y. In various fields of application, different terminologies are used in place of dependent and independent variables. A regression model relates Y to a function of X and β. Y≈f⁡(X,β){\displaystyle Y\approx f(\mathbf {X} ,{\boldsymbol {\beta }})} The approximation is usually formalized as E(Y | X) = f(X, β). To carry out regression analysis, the form of the function f must be specified. Sometimes the form of this function is based on knowledge about the relationship between Y and X that does not rely on the data. If no such knowledge is available, a flexible or convenient form for f is chosen. Assume now that the vector of unknown parameters β is of length k. 
In order to perform a regression analysis the user must provide information about the dependent variable Y: If N data points of the form (Y, X) are observed, where N < k, most classical approaches to regression analysis cannot be performed: since the system of equations defining the regression model is underdetermined, there are not enough data to recover β. If exactly N = k data points are observed, and the function f is linear, the equations Y = f(X, β) can be solved exactly rather than approximately. This reduces to solving a set of N equations with N unknowns (the elements of β), which has a unique solution as long as the X are linearly independent. If f is nonlinear, a solution may not exist, or many solutions may exist. The most common situation is where N > k data points are observed. In this case, there is enough information in the data to estimate a unique value for β that best fits the data in some sense, and the regression model when applied to the data can be viewed as an overdetermined system in β. In the last case, the regression analysis provides the tools for: Finding a solution for unknown parameters β that will, for example, minimize the distance between the measured and predicted values of the dependent variable Y (also known as method of least squares). Under certain statistical assumptions, the regression analysis uses the surplus of information to provide statistical information about the unknown parameters β and predicted values of the dependent variable Y. Necessary number of independent measurements Consider a regression model which has three unknown parameters, β0, β1, and β2. Suppose an experimenter performs 10 measurements all at exactly the same value of independent variable vector X (which contains the independent variables X1, X2, and X3). In this case, regression analysis fails to give a unique set of estimated values for the three unknown parameters; the experimenter did not provide enough information. The best one can do is to estimate the average value and the standard deviation of the dependent variable Y. Similarly, measuring at two different values of X would give enough data for a regression with two unknowns, but not for three or more unknowns. If the experimenter had performed measurements at three different values of the independent variable vector X, then regression analysis would provide a unique set of estimates for the three unknown parameters in β. In the case of general linear regression, the above statement is equivalent to the requirement that the matrix XTX is invertible. Statistical assumptions When the number of measurements, N, is larger than the number of unknown parameters, k, and the measurement errors εi are normally distributed then the excess of information contained in (N − k) measurements is used to make statistical predictions about the unknown parameters. This excess of information is referred to as the degrees of freedom of the regression. Underlying assumptions Classical assumptions for regression analysis include: The sample is representative of the population for the inference prediction. The error is a random variable with a mean of zero conditional on the explanatory variables. The independent variables are measured with no error. (Note: If this is not so, modeling may be done instead using errors-in-variables model techniques). The predictors are linearly independent, i.e. it is not possible to express any predictor as a linear combination of the others. 
The errors are uncorrelated, that is, the variance–covariance matrix of the errors is diagonal and each non-zero element is the variance of the error. The variance of the error is constant across observations (homoscedasticity). If not, weighted least squares or other methods might instead be used. These are sufficient conditions for the least-squares estimator to possess desirable properties; in particular, these assumptions imply that the parameter estimates will be unbiased, consistent, and efficient in the class of linear unbiased estimators. It is important to note that actual data rarely satisfies the assumptions. That is, the method is used even though the assumptions are not true. Variation from the assumptions can sometimes be used as a measure of how far the model is from being useful. Many of these assumptions may be relaxed in more advanced treatments. Reports of statistical analyses usually include analyses of tests on the sample data and methodology for the fit and usefulness of the model. Assumptions include the geometrical support of the variables.[17]Template:Clarify Independent and dependent variables often refer to values measured at point locations. There may be spatial trends and spatial autocorrelation in the variables that violate statistical assumptions of regression. Geographic weighted regression is one technique to deal with such data.[18] Also, variables may include values aggregated by areas. With aggregated data the modifiable areal unit problem can cause extreme variation in regression parameters.[19] When analyzing data aggregated by political boundaries, postal codes or census areas results may be very distinct with a different choice of units. {{#invoke:main|main}} {{#invoke:Hatnote|hatnote}} In linear regression, the model specification is that the dependent variable, yi{\displaystyle y_{i}} is a linear combination of the parameters (but need not be linear in the independent variables). For example, in simple linear regression for modeling n{\displaystyle n} data points there is one independent variable: xi{\displaystyle x_{i}} , and two parameters, β0{\displaystyle \beta _{0}} and β1{\displaystyle \beta _{1}} : straight line: yi=β0+β1⁢xi+εi,i=1,…,n.{\displaystyle y_{i}=\beta _{0}+\beta _{1}x_{i}+\varepsilon _{i},\quad i=1,\dots ,n.\!} In multiple linear regression, there are several independent variables or functions of independent variables. Adding a term in xi2 to the preceding regression gives: parabola: yi=β0+β1⁢xi+β2⁢xi2+εi,i=1,…,n.{\displaystyle y_{i}=\beta _{0}+\beta _{1}x_{i}+\beta _{2}x_{i}^{2}+\varepsilon _{i},\ i=1,\dots ,n.\!} This is still linear regression; although the expression on the right hand side is quadratic in the independent variable xi{\displaystyle x_{i}} , it is linear in the parameters β0{\displaystyle \beta _{0}} , β1{\displaystyle \beta _{1}} and β2.{\displaystyle \beta _{2}.} In both cases, εi{\displaystyle \varepsilon _{i}} is an error term and the subscript i{\displaystyle i} indexes a particular observation. Given a random sample from the population, we estimate the population parameters and obtain the sample linear regression model: yi^=β^0+β^1⁢xi.{\displaystyle {\widehat {y_{i}}}={\widehat {\beta }}_{0}+{\widehat {\beta }}_{1}x_{i}.} The residual, ei=yi−y^i{\displaystyle e_{i}=y_{i}-{\widehat {y}}_{i}} , is the difference between the value of the dependent variable predicted by the model, yi^{\displaystyle {\widehat {y_{i}}}} , and the true value of the dependent variable, yi{\displaystyle y_{i}} . 
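As a concrete illustration of the quantities just defined, the short R sketch below simulates data from a known straight line, fits it by ordinary least squares (the estimation method discussed next), and extracts the estimated coefficients, residuals, and error variance. The data and true parameter values are arbitrary.

set.seed(1)
x <- runif(50, 0, 10)
y <- 2 + 0.5 * x + rnorm(50)   # true beta0 = 2, beta1 = 0.5, plus error

fit <- lm(y ~ x)               # ordinary least squares fit of y_i = beta0 + beta1*x_i + e_i
coef(fit)                      # estimated beta0-hat and beta1-hat
e <- residuals(fit)            # e_i = y_i - y_i-hat
sum(e^2)                       # sum of squared residuals (SSE)
summary(fit)$sigma^2           # estimated error variance, SSE/(n - 2)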
One method of estimation is ordinary least squares. This method obtains parameter estimates that minimize the sum of squared residuals, SSE,[20][21] also sometimes denoted RSS: S⁢S⁢E=∑i=1nei2.{\displaystyle SSE=\sum _{i=1}^{n}e_{i}^{2}.\,} Minimization of this function results in a set of normal equations, a set of simultaneous linear equations in the parameters, which are solved to yield the parameter estimators, β^0,β^1{\displaystyle {\widehat {\beta }}_{0},{\widehat {\beta }}_{1}} . Illustration of linear regression on a data set. In the case of simple regression, the formulas for the least squares estimates are β1^=∑(xi−x¯)⁢(yi−y¯)∑(xi−x¯)2 and β0^=y¯−β1^⁢x¯{\displaystyle {\widehat {\beta _{1}}}={\frac {\sum (x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{\sum (x_{i}-{\bar {x}})^{2}}}{\text{ and }}{\hat {\beta _{0}}}={\bar {y}}-{\widehat {\beta _{1}}}{\bar {x}}} where x¯{\displaystyle {\bar {x}}} is the mean (average) of the x{\displaystyle x} values and y¯{\displaystyle {\bar {y}}} is the mean of the y{\displaystyle y} values. Under the assumption that the population error term has a constant variance, the estimate of that variance is given by: σ^ε2=S⁢S⁢En−2.{\displaystyle {\hat {\sigma }}_{\varepsilon }^{2}={\frac {SSE}{n-2}}.\,} This is called the mean square error (MSE) of the regression. The denominator is the sample size reduced by the number of model parameters estimated from the same data, (n-p) for p regressors or (n-p-1) if an intercept is used.[22] In this case, p=1 so the denominator is n-2. The standard errors of the parameter estimates are given by σ^β0=σ^ε⁢1n+x¯2∑(xi−x¯)2{\displaystyle {\hat {\sigma }}_{\beta _{0}}={\hat {\sigma }}_{\varepsilon }{\sqrt {{\frac {1}{n}}+{\frac {{\bar {x}}^{2}}{\sum (x_{i}-{\bar {x}})^{2}}}}}} σ^β1=σ^ε⁢1∑(xi−x¯)2.{\displaystyle {\hat {\sigma }}_{\beta _{1}}={\hat {\sigma }}_{\varepsilon }{\sqrt {\frac {1}{\sum (x_{i}-{\bar {x}})^{2}}}}.} Under the further assumption that the population error term is normally distributed, the researcher can use these estimated standard errors to create confidence intervals and conduct hypothesis tests about the population parameters. General linear model {{#invoke:Hatnote|hatnote}} {{#invoke:Hatnote|hatnote}} In the more general multiple regression model, there are p independent variables: yi=β1⁢xi⁢1+β2⁢xi⁢2+⋯+βp⁢xi⁢p+εi,{\displaystyle y_{i}=\beta _{1}x_{i1}+\beta _{2}x_{i2}+\cdots +\beta _{p}x_{ip}+\varepsilon _{i},\,} where xij is the ith observation on the jth independent variable, and where the first independent variable takes the value 1 for all i (so β1{\displaystyle \beta _{1}} is the regression intercept). The least squares parameter estimates are obtained from p normal equations. The residual can be written as εi=yi−β^1⁢xi⁢1−⋯−β^p⁢xi⁢p.{\displaystyle \varepsilon _{i}=y_{i}-{\hat {\beta }}_{1}x_{i1}-\cdots -{\hat {\beta }}_{p}x_{ip}.} The normal equations are ∑i=1n∑k=1pXi⁢j⁢Xi⁢k⁢β^k=∑i=1nXi⁢j⁢yi,j=1,…,p.{\displaystyle \sum _{i=1}^{n}\sum _{k=1}^{p}X_{ij}X_{ik}{\hat {\beta }}_{k}=\sum _{i=1}^{n}X_{ij}y_{i},\ j=1,\dots ,p.\,} In matrix notation, the normal equations are written as (X⊤⁢X)⁢β^=X⊤⁢Y,{\displaystyle \mathbf {(X^{\top }X){\hat {\boldsymbol {\beta }}}={}X^{\top }Y} ,\,} where the ij element of X is xij, the i element of the column vector Y is yi, and the j element of β^{\displaystyle {\hat {\beta }}} is β^j{\displaystyle {\hat {\beta }}_{j}} . Thus X is n×p, Y is n×1, and β^{\displaystyle {\hat {\beta }}} is p×1. 
The solution is β^=(X⊤⁢X)−1⁢X⊤⁢Y.{\displaystyle \mathbf {{\hat {\boldsymbol {\beta }}}={}(X^{\top }X)^{-1}X^{\top }Y} .\,} Template:Category see also Once a regression model has been constructed, it may be important to confirm the goodness of fit of the model and the statistical significance of the estimated parameters. Commonly used checks of goodness of fit include the R-squared, analyses of the pattern of residuals and hypothesis testing. Statistical significance can be checked by an F-test of the overall fit, followed by t-tests of individual parameters. Interpretations of these diagnostic tests rest heavily on the model assumptions. Although examination of the residuals can be used to invalidate a model, the results of a t-test or F-test are sometimes more difficult to interpret if the model's assumptions are violated. For example, if the error term does not have a normal distribution, in small samples the estimated parameters will not follow normal distributions and complicate inference. With relatively large samples, however, a central limit theorem can be invoked such that hypothesis testing may proceed using asymptotic approximations. "Limited dependent" variables The phrase "limited dependent" is used in econometric statistics for categorical and constrained variables. The response variable may be non-continuous ("limited" to lie on some subset of the real line). For binary (zero or one) variables, if analysis proceeds with least-squares linear regression, the model is called the linear probability model. Nonlinear models for binary dependent variables include the probit and logit model. The multivariate probit model is a standard method of estimating a joint relationship between several binary dependent variables and some independent variables. For categorical variables with more than two values there is the multinomial logit. For ordinal variables with more than two values, there are the ordered logit and ordered probit models. Censored regression models may be used when the dependent variable is only sometimes observed, and Heckman correction type models may be used when the sample is not randomly selected from the population of interest. An alternative to such procedures is linear regression based on polychoric correlation (or polyserial correlations) between the categorical variables. Such procedures differ in the assumptions made about the distribution of the variables in the population. If the variable is positive with low values and represents the repetition of the occurrence of an event, then count models like the Poisson regression or the negative binomial model may be used instead. Interpolation and extrapolation Regression models predict a value of the Y variable given known values of the X variables. Prediction within the range of values in the dataset used for model-fitting is known informally as interpolation. Prediction outside this range of the data is known as extrapolation. Performing extrapolation relies strongly on the regression assumptions. The further the extrapolation goes outside the data, the more room there is for the model to fail due to differences between the assumptions and the sample data or the true values. It is generally advised {{ safesubst:#invoke:Unsubst||date=__DATE__ |$B= {{#invoke:Category handler|main}}{{#invoke:Category handler|main}}[citation needed] }} that when performing extrapolation, one should accompany the estimated value of the dependent variable with a prediction interval that represents the uncertainty. 
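The following R sketch (arbitrary simulated data) shows such prediction intervals computed both inside the observed range of x (interpolation) and well outside it (extrapolation):

set.seed(1)
x <- runif(50, 0, 10)
y <- 2 + 0.5 * x + rnorm(50)
fit <- lm(y ~ x)

# x = 5 lies inside the observed data; x = 15 and x = 30 are extrapolations
new_x <- data.frame(x = c(5, 15, 30))
predict(fit, newdata = new_x, interval = "prediction", level = 0.95)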
Such intervals tend to expand rapidly as the values of the independent variable(s) moved outside the range covered by the observed data. For such reasons and others, some tend to say that it might be unwise to undertake extrapolation.[23] However, this does not cover the full set of modelling errors that may be being made: in particular, the assumption of a particular form for the relation between Y and X. A properly conducted regression analysis will include an assessment of how well the assumed form is matched by the observed data, but it can only do so within the range of values of the independent variables actually available. This means that any extrapolation is particularly reliant on the assumptions being made about the structural form of the regression relationship. Best-practice advice here{{ safesubst:#invoke:Unsubst||date=__DATE__ |$B= {{#invoke:Category handler|main}}{{#invoke:Category handler|main}}[citation needed] }} is that a linear-in-variables and linear-in-parameters relationship should not be chosen simply for computational convenience, but that all available knowledge should be deployed in constructing a regression model. If this knowledge includes the fact that the dependent variable cannot go outside a certain range of values, this can be made use of in selecting the model – even if the observed dataset has no values particularly near such bounds. The implications of this step of choosing an appropriate functional form for the regression can be great when extrapolation is considered. At a minimum, it can ensure that any extrapolation arising from a fitted model is "realistic" (or in accord with what is known). Nonlinear regression {{#invoke:main|main}} When the model function is not linear in the parameters, the sum of squares must be minimized by an iterative procedure. This introduces many complications which are summarized in Differences between linear and non-linear least squares Power and sample size calculations There are no generally agreed methods for relating the number of observations versus the number of independent variables in the model. One rule of thumb suggested by Good and Hardin is N=mn{\displaystyle N=m^{n}} , where N{\displaystyle N} is the sample size, n{\displaystyle n} is the number of independent variables and m{\displaystyle m} is the number of observations needed to reach the desired precision if the model had only one independent variable.[24] For example, a researcher is building a linear regression model using a dataset that contains 1000 patients (N{\displaystyle N} ). If the researcher decides that five observations are needed to precisely define a straight line (m{\displaystyle m} ), then the maximum number of independent variables the model can support is 4, because log⁡1000log⁡5=4.29{\displaystyle {\frac {\log {1000}}{\log {5}}}=4.29} . Although the parameters of a regression model are usually estimated using the method of least squares, other methods which have been used include: Bayesian methods, e.g. 
Bayesian linear regression Percentage regression, for situations where reducing percentage errors is deemed more appropriate.[25] Least absolute deviations, which is more robust in the presence of outliers, leading to quantile regression Nonparametric regression, requires a large number of observations and is computationally intensive Distance metric learning, which is learned by the search of a meaningful distance metric in a given input space.[26] {{#invoke:main|main}} All major statistical software packages perform least squares regression analysis and inference. Simple linear regression and multiple regression using least squares can be done in some spreadsheet applications and on some calculators. While many statistical software packages can perform various types of nonparametric and robust regression, these methods are less standardized; different software packages implement different methods, and a method with a given name may be implemented differently in different packages. Specialized regression software has been developed for use in fields such as survey analysis and neuroimaging. {{#invoke:Portal|portal}} Template:Colbegin Fraction of variance unexplained Kriging (a linear least squares estimation algorithm) Local regression Modifiable areal unit problem Multivariate adaptive regression splines Multivariate normal distribution Pearson product-moment correlation coefficient Prediction interval Robust regression Segmented regression Stepwise regression Trend estimation Template:Colend ↑ {{#invoke:Citation/CS1|citation |CitationClass=journal }} ↑ David A. Freedman, Statistical Models: Theory and Practice, Cambridge University Press (2005) ↑ R. Dennis Cook; Sanford Weisberg Criticism and Influence Analysis in Regression, Sociological Methodology, Vol. 13. (1982), pp. 313–361 ↑ A.M. Legendre. Nouvelles méthodes pour la détermination des orbites des comètes, Firmin Didot, Paris, 1805. "Sur la Méthode des moindres quarrés" appears as an appendix. ↑ C.F. Gauss. Theoria Motus Corporum Coelestium in Sectionibus Conicis Solem Ambientum. (1809) ↑ C.F. Gauss. Theoria combinationis observationum erroribus minimis obnoxiae. (1821/1823) ↑ {{#invoke:citation/CS1|citation |CitationClass=book }} ↑ Francis Galton. "Typical laws of heredity", Nature 15 (1877), 492–495, 512–514, 532–533. (Galton uses the term "reversion" in this paper, which discusses the size of peas.) ↑ Francis Galton. Presidential address, Section H, Anthropology. (1885) (Galton uses the term "regression" in this paper, which discusses the height of humans.) ↑ Rodney Ramcharan. Regressions: Why Are Economists Obessessed with Them? March 2006. Accessed 2011-12-03. ↑ N. Cressie (1996) Change of Support and the Modiable Areal Unit Problem. Geographical Systems 3:159–180. ↑ M. H. Kutner, C. J. Nachtsheim, and J. Neter (2004), "Applied Linear Regression Models", 4th ed., McGraw-Hill/Irwin, Boston (p. 25) ↑ N. Ravishankar and D. K. Dey (2002), "A First Course in Linear Model Theory", Chapman and Hall/CRC, Boca Raton (p. 101) ↑ Steel, R.G.D, and Torrie, J. H., Principles and Procedures of Statistics with Special Reference to the Biological Sciences., McGraw Hill, 1960, page 288. ↑ Chiang, C.L, (2003) Statistical methods of analysis, World Scientific. ISBN 981-238-310-7 - page 274 section 9.7.4 "interpolation vs extrapolation" William H. Kruskal and Judith M. Tanur, ed. (1978), "Linear Hypotheses," International Encyclopedia of Statistics. Free Press, v. 1, Evan J. Williams, "I. Regression," pp. 523–41. Julian C. Stanley, "II. 
Lindley, D.V. (1987). "Regression and correlation analysis," New Palgrave: A Dictionary of Economics, v. 4, pp. 120–23.
Birkes, David and Dodge, Y., Alternative Methods of Regression. ISBN 0-471-56881-3
Chatfield, C. (1993) "Calculating Interval Forecasts," Journal of Business and Economic Statistics, 11, pp. 121–135.
Fox, J. (1997). Applied Regression Analysis, Linear Models and Related Methods. Sage
Hardle, W., Applied Nonparametric Regression (1990), ISBN 0-521-42950-1
Meade, N. and T. Islam (1995) "Prediction Intervals for Growth Curve Forecasts," Journal of Forecasting, 14, pp. 413–430.
A. Sen, M. Srivastava, Regression Analysis — Theory, Methods, and Applications, Springer-Verlag, Berlin, 2011 (4th printing).
T. Strutz: Data Fitting and Uncertainty (A practical introduction to weighted least squares and beyond). Vieweg+Teubner, ISBN 978-3-8348-1022-9.
Malakooti, B. (2013). Operations and Production Systems with Multiple Objectives. John Wiley & Sons.

External links

Earliest Uses: Regression – basic history and references
Regression of Weakly Correlated Data – how linear regression mistakes can appear when Y-range is much smaller than X-range
CommonCrawl
Critical Conductance of a Mesoscopic System: Interplay of the Spectral and Eigenfunction Correlations at the Metal-Insulator Transition by D. G. Polyakov We study the system-size dependence of the averaged critical conductance $g(L)$ at the Anderson transition. We have: (i) related the correction $\delta g(L)=g(\infty)-g(L)\propto L^{-y}$ to the spectral correlations; (ii) expressed $\delta g(L)$ in terms of the quantum return probability; (iii) argued that $y=\eta$ -- the critical exponent of eigenfunction correlations. Experimental implications are discussed. Source: http://arxiv.org/abs/cond-mat/9804259v2 Quantum Hall Effect at Finite Temperatures Recent work on the temperature-driven delocalization in the quantum Hall regime is reviewed, with emphasis on the role of electron-electron interactions and the correlation properties of disorder. We have stressed (i) the crucial role of the Coulomb interaction in the integer quantum Hall effect; (ii) the classical aspects of electron dynamics in samples with long-range disorder. Spin-flip scattering in the quantum Hall regime We present a microscopic theory of spin-orbit coupling in the integer quantum Hall regime. The spin-orbit scattering length is evaluated in the limit of long-range random potential. The spin-flip rate is shown to be determined by rare fluctuations of anomalously high electric field. A mechanism of strong spin-orbit scattering associated with exchange-induced spontaneous spin-polarization is suggested. Scaling of the spin-splitting of the delocalization transition with the strength of spin-orbit... Universal Prefactor of Activated Conductivity in the Quantum Hall Effect by D. G. Polyakov; B. I. Shklovskii The prefactor of the activated dissipative conductivity in a plateau range of the quantum Hall effect is studied in the case of a long-range random potential. It is shown that due to long time it takes for an electron to drift along the perimeter of a large percolation cluster, phonons are able to maintain quasi-equilibrium inside the cluster. The saddle points separating such clusters may then be viewed as ballistic point contacts between electron reservoirs with different electrochemical... Dynamical scaling at the quantum Hall transition: Coulomb blockade versus phase breaking by D. G. Polyakov; K. V. Samokhin We argue that the finite temperature dynamics of the integer quantum Hall system is governed by two independent length scales. The consistent scaling description of the transition makes crucial use of two temperature critical exponents, reflecting the interplay between charging effects and interaction-induced dephasing. Experimental implications of the two-scale picture are discussed. Quantum Hall effect in spin-degenerate Landau levels: Spin-orbit enhancement of the conductivity by D. G. Polyakov; M. E. Raikh The quantum Hall regime in a smooth random potential is considered when two disorder-broadened Zeeman levels overlap strongly. Spin-orbit coupling is found to cause a drastic change in the percolation network which leads to a strong enhancement of the dissipative conductivity at finite temperature, provided the Fermi level $E_F$ lies between the energies of two delocalized states $E=\pm\Delta$, $2\Delta$ being the Zeeman splitting. The conductivity is shown to exhibit a box-like behavior with... Transport of interacting electrons through a double barrier in quantum wires by D. G. Polyakov; I. V.
Gornyi We generalize the fermionic renormalization group method to describe analytically transport through a double barrier structure in a one-dimensional system. Focusing on the case of weakly interacting electrons, we investigate thoroughly the dependence of the conductance on the strength and the shape of the double barrier for arbitrary temperature T. Our approach allows us to systematically analyze the contributions to renormalized scattering amplitudes from different characteristic scales absent... Composite fermions in a long-range random magnetic field: Quantum Hall effect versus Shubnikov-de Haas oscillations by A. D. Mirlin; D. G. Polyakov; P. Woelfle We study transport in a smooth random magnetic field, with emphasis on composite fermions (CF) near half-filling of the Landau level. When either the amplitude of the magnetic field fluctuations or its mean value $\bar B$ is large enough, the transport is of percolating nature. While at $\bar{B}=0$ the percolation effects enhance the conductivity $\sigma_{xx}$, increasing $\bar B$ (which corresponds to moving away from half-filling for the CF problem) leads to a sharp falloff of $\sigma_{xx}$... Cyclotron resonance in antidot arrays by D. G. Polyakov; F. Evers; I. V. Gornyi We study the dynamical properties of an electron gas scattered by impenetrable antidots in the presence of a strong magnetic field. We find that the lineshape of the cyclotron resonance is very different from the Lorentzian and is not characterized by the Drude scattering rate. We show that the dissipative dynamical response of skipping orbits, $S_c(\omega)$, is broadened on a scale of the cyclotron frequency $\omega_c$ and has a sharp dip $\propto |\omega-\omega_c|$. For small antidots,... Ultranarrow resonance in Coulomb drag between quantum wires at coinciding densities by A. P. Dmitriev; I. V. Gornyi; D. G. Polyakov We investigate the influence of the chemical potential mismatch $\Delta$ (different electron densities) on Coulomb drag between two parallel ballistic quantum wires. For pair collisions, the drag resistivity $\rho_{\rm D}(\Delta)$ shows a peculiar anomaly at $\Delta=0$ with $\rho_{\rm D}$ being finite at $\Delta=0$ and vanishing at any nonzero $\Delta$. The "bodyless" resonance in $\rho_{\rm D}(\Delta)$ at zero $\Delta$ is only broadened by processes of multi-particle scattering. We... Topics: Strongly Correlated Electrons, Mesoscale and Nanoscale Physics, Condensed Matter Source: http://arxiv.org/abs/1512.00373 Theory of the fractional microwave-induced resistance oscillations by I. A. Dmitriev; A. D. Mirlin; D. G. Polyakov We develop a systematic theory of microwave-induced oscillations in magnetoresistivity of a 2D electron gas in the vicinity of fractional harmonics of the cyclotron resonance, observed in recent experiments. We show that in the limit of well-separated Landau levels the effect is dominated by a change of the distribution function induced by multiphoton processes. At moderate magnetic field, a single-photon mechanism originating from the microwave-induced sidebands in the density of states of... Source: http://arxiv.org/abs/0707.0990v2 Cyclotron resonance harmonics in the ac response of a 2D electron gas with smooth disorder The frequency-dependent conductivity $\sigma_{xx}(\omega)$ of 2D electrons subjected to a transverse magnetic field and smooth disorder is calculated. The interplay of Landau quantization and disorder scattering gives rise to an oscillatory structure that survives in the high-temperature limit. 
The relation to recent experiments on photoconductivity by Zudov {\it et al.} and Mani {\it et al.} is discussed. Many-body delocalization transition and relaxation in a quantum dot by I. V. Gornyi; A. D. Mirlin; D. G. Polyakov We revisit the problem of quantum localization of many-body states in a quantum dot and the associated problem of relaxation of an excited state in a finite correlated electron system. We determine the localization threshold for the eigenstates in Fock space. We argue that the localization-delocalization transition (which manifests itself, e.g., in the statistics of many-body energy levels) becomes sharp in the limit of a large dimensionless conductance (or, equivalently, in the limit of weak... Coulomb drag between ballistic quantum wires We develop a kinetic equation description of Coulomb drag between ballistic one-dimensional electron systems, which enables us to demonstrate that equilibration processes between right- and left-moving electrons are crucially important for establishing dc drag. In one-dimensional geometry, this type of equilibration requires either backscattering near the Fermi level or scattering with small momentum transfer near the bottom of the electron spectrum. Importantly, pairwise forward scattering in... Interacting electrons in disordered wires: Anderson localization and low-temperature transport We study transport of interacting electrons in a low-dimensional disordered system at low temperature $T$. In view of localization by disorder, the conductivity $\sigma(T)$ may only be non-zero due to electron-electron scattering. For weak interactions, the weak-localization regime crosses over with lowering $T$ into a dephasing-induced "power-law hopping". As $T$ is further decreased, the Anderson localization in Fock space crucially affects $\sigma(T)$, inducing a transition at... Nonequilibrium kinetics of a disordered Luttinger liquid by D. A. Bagrets; I. V. Gornyi; D. G. Polyakov We develop a kinetic theory for strongly correlated disordered one-dimensional electron systems out of equilibrium, within the Luttinger liquid model. In the absence of inhomogeneities, the model exhibits no relaxation to equilibrium. We derive kinetic equations for electron and plasmon distribution functions in the presence of impurities and calculate the equilibration rate $\gamma_E$. Remarkably, for not too low temperature and bias voltage, $\gamma_E$ is given by the elastic backscattering... Electron transport in disordered Luttinger liquid We study the transport properties of interacting electrons in a disordered quantum wire within the framework of the Luttinger liquid model. We demonstrate that the notion of weak localization is applicable to the strongly correlated one-dimensional electron system. Two alternative approaches to the problem are developed, both combining fermionic and bosonic treatment of the underlying physics. We calculate the relevant dephasing rate, which for spinless electrons is governed by the interplay of... Transport of charge-density waves in the presence of disorder: Classical pinning vs quantum localization by A. D. Mirlin; D. G. Polyakov; V. M. Vinokur We consider the interplay of the elastic pinning and the Anderson localization in the transport properties of a charge-density wave in one dimension, within the framework of the Luttinger model in the limit of strong repulsion. We address a conceptually important issue of which of the two disorder-induced phenomena limits the mobility more effectively. 
We argue that the interplay of the classical and quantum effects in transport of a very rigid charge-density wave is quite nontrivial: the... Oscillatory ac- and photoconductivity of a 2D electron gas: Quasiclassical transport beyond the Boltzmann equation We have analyzed the quasiclassical mechanism of magnetooscillations in the ac- and photoconductivity, related to non-Markovian dynamics of disorder-induced electron scattering. While the magnetooscillations in the photoconductivity are found to be weak, the effect manifests itself much more strongly in the ac conductivity, where it may easily dominate over the oscillations due to the Landau quantization. We argue that the damping of the oscillatory photoconductivity provides a reliable method... Microwave photoconductivity of a 2D electron gas: Mechanisms and their interplay at high radiation power We develop a systematic theory of microwave-induced oscillations in the magnetoresistivity of a two-dimensional electron gas, focusing on the regime of strongly overlapping Landau levels. At linear order in microwave power, two novel mechanisms of the oscillations (``quadrupole'' and ``photovoltaic'') are identified, in addition to those studied before (``displacement'' and ``inelastic''). The quadrupole and photovoltaic mechanisms are shown to be the only ones that give rise to oscillations in... Dephasing and weak localization in disordered Luttinger liquid We study the transport properties of interacting electrons in a disordered quantum wire within the framework of the Luttinger liquid model. The conductivity at finite temperature is nonzero only because of inelastic electron-electron scattering. We demonstrate that the notion of weak localization is applicable to the strongly correlated one-dimensional electron system. We calculate the relevant dephasing rate, which for spinless electrons is governed by the interplay of electron-electron... Fractional microwave-induced resistance oscillations We develop a systematic theory of microwave-induced oscillations in magnetoresistivity of a 2D electron gas in the vicinity of fractional harmonics of the cyclotron resonance, observed in recent experiments. We show that in the limit of well-separated Landau levels the effect is dominated by the multiphoton inelastic mechanism. At moderate magnetic field, two single-photon mechanisms become important. One of them is due to resonant series of multiple single-photon transitions, while the other... Quasiclassical magnetotransport in a random array of antidots by D. G. Polyakov; F. Evers; A. D. Mirlin; P. Woelfle We study theoretically the magnetoresistance $\rho_{xx}(B)$ of a two-dimensional electron gas scattered by a random ensemble of impenetrable discs in the presence of a long-range correlated random potential. We believe that this model describes a high-mobility semiconductor heterostructure with a random array of antidots. We show that the interplay of scattering by the two types of disorder generates new behavior of $\rho_{xx}(B)$ which is absent for only one kind of disorder. We demonstrate... Coulomb drag between helical Luttinger liquids by N. Kainaris; I. V. Gornyi; A. Levchenko; D. G. Polyakov We theoretically study Coulomb drag between two helical edges with broken spin-rotational symmetry, such as would occur in two capacitively coupled quantum spin Hall insulators. 
For the helical edges, Coulomb drag is particularly interesting because it specifically probes the inelastic interactions that break the conductance quantization for a single edge. Using the kinetic equation formalism, supplemented by bosonization, we find that the drag resistivity $\rho_D$ exhibits a nonmonotonic... Topics: Mesoscale and Nanoscale Physics, Condensed Matter, Strongly Correlated Electrons Quasiclassical negative magnetoresistance of a 2D electron gas: interplay of strong scatterers and smooth disorder by A. D. Mirlin; D. G. Polyakov; F. Evers; P. Woelfle We study the quasiclassical magnetotransport of non-interacting fermions in two dimensions moving in a random array of strong scatterers (antidots, impurities or defects) on the background of a smooth random potential. We demonstrate that the combination of the two types of disorder induces a novel mechanism leading to a strong negative magnetoresistance, followed by the saturation of the magnetoresistivity $\rho_{xx}(B)$ at a value determined solely by the smooth disorder. Experimental... Semiclassical theory of transport in a random magnetic field by F. Evers; A. D. Mirlin; D. G. Polyakov; P. Woelfle We study the semiclassical kinetics of 2D fermions in a smoothly varying magnetic field $B({\bf r})$. The nature of the transport depends crucially on both the strength $B_0$ of the random component of $B({\bf r})$ and its mean value $\bar{B}$. For $\bar{B}=0$, the governing parameter is $\alpha=d/R_0$, where $d$ is the correlation length of disorder and $R_0$ is the Larmor radius in the field $B_0$. While for $\alpha\ll 1$ the Drude theory applies, at $\alpha\gg 1$ most particles drift... Non-adiabatic scattering of a classical particle in an inhomogeneous magnetic field We study the violation of the adiabaticity of the electron dynamics in a slowly varying magnetic field. We formulate and solve exactly a non-adiabatic scattering problem. In particular, we consider scattering on a magnetic field inhomogeneity which models scatterers in the composite-fermion theory of the half-filled Landau level. The calculated non-adiabatic shift of the guiding center is exponentially small and exhibits an oscillatory behavior related to the "self-commensurability"... Emergence of domains and nonlinear transport in the zero-resistance state by I. A. Dmitriev; M. Khodas; A. D. Mirlin; D. G. Polyakov We study transport in the domain state, the so-called zero-resistance state, that emerges in a two-dimensional electron system in which the combined action of microwave radiation and magnetic field produces a negative absolute conductivity. We show that the voltage-biased system has a rich phase diagram in the system size and voltage plane, with second- and first-order transitions between the domain and homogeneous states for small and large voltages, respectively. We find the residual negative... Nonequilibrium phenomena in high Landau levels by I. A. Dmitriev; A. D. Mirlin; D. G. Polyakov; M. A. Zudov Developments in the physics of 2D electron systems during the last decade have revealed a new class of nonequilibrium phenomena in the presence of a moderately strong magnetic field. The hallmark of these phenomena is magnetoresistance oscillations generated by the external forces that drive the electron system out of equilibrium. The rich set of dramatic phenomena of this kind, discovered in high mobility semiconductor nanostructures, includes, in particular, microwave radiation-induced... 
Relaxation processes in a disordered Luttinger liquid by D. A. Bagrets; I. V. Gornyi; A. D. Mirlin; D. G. Polyakov The Luttinger liquid model, which describes interacting electrons in a single-channel quantum wire, is completely integrable in the absence of disorder and as such does not exhibit any relaxation to equilibrium. We consider relaxation processes induced by inelastic electron-electron interactions in a disordered Luttinger liquid, focusing on the equilibration rate and its essential differences from the electron-electron scattering rate as well as the rate of phase relaxation. In the first part... Nonadiabatic scattering of a quantum particle in an inhomogenous magnetic field by F. Dahlem; F. Evers; A. D. Mirlin; D. G. Polyakov; P. Woelfle We investigate the quantum effects, in particular the Landau-level quantization, in the scattering of a particle the nonadiabatic classical dynamics of which is governed by an adiabatic invariant. As a relevant example, we study the scattering of a drifting particle on a magnetic barrier in the quantum limit where the cyclotron energy is much larger than a broadening of the Landau levels induced by the nonadiabatic transitions. We find that, despite the level quantization, the exponential... Quantum interference and spin-charge separation in a disordered Luttinger liquid by A. G. Yashenkin; I. V. Gornyi; A. D. Mirlin; D. G. Polyakov We study the influence of spin on the quantum interference of interacting electrons in a single-channel disordered quantum wire within the framework of the Luttinger liquid (LL) model. The nature of the electron interference in a spinful LL is particularly nontrivial because the elementary bosonic excitations that carry charge and spin propagate with different velocities. We extend the functional bosonization approach to treat the fermionic and bosonic degrees of freedom in a disordered spinful... Strong magnetoresistance induced by long-range disorder by A. D. Mirlin; J. Wilke; F. Evers; D. G. Polyakov; P. Woelfle We calculate the semiclassical magnetoresistivity $\rho_{xx}(B)$ of non-interacting fermions in two dimensions moving in a weak and smoothly varying random potential or random magnetic field. We demonstrate that in a broad range of magnetic fields the non-Markovian character of the transport leads to a strong positive magnetoresistance. The effect is especially pronounced in the case of a random magnetic field where $\rho_{xx}(B)$ becomes parametrically much larger than its B=0 value. Zero-frequency anomaly in quasiclassical ac transport: Memory effects in a two-dimensional metal with a long-range random potential or random magnetic field by J. Wilke; A. D. Mirlin; D. G. Polyakov; F. Evers; P. Woelfle We study the low-frequency behavior of the {\it ac} conductivity $\sigma(\omega)$ of a two-dimensional fermion gas subject to a smooth random potential (RP) or random magnetic field (RMF). We find a non-analytic $\propto|\omega|$ correction to ${\rm Re} \sigma$, which corresponds to a $1/t^2$ long-time tail in the velocity correlation function. This contribution is induced by return processes neglected in Boltzmann transport theory. The prefactor of this $|\omega|$-term is positive and... Quantum Hall ferromagnets, cooperative transport anisotropy, and the random field Ising model by J. T. Chalker; D. G. Polyakov; F. Evers; A. D. Mirlin; P. 
Woelfle We discuss the behaviour of a quantum Hall system when two Landau levels with opposite spin and combined filling factor near unity are brought into energetic coincidence using an in-plane component of magnetic field. We focus on the interpretation of recent experiments under these conditions [Zeitler et al, Phys. Rev. Lett. 86, 866 (2001); Pan et al, Phys. Rev. B 64, 121305 (2001)], in which a large resistance anisotropy develops at low temperatures. Modelling the systems involved as Ising... Mechanisms of the microwave photoconductivity in 2D electron systems with mixed disorder by I. A. Dmitriev; M. Khodas; A. D. Mirlin; D. G. Polyakov; M. G. Vavilov We present a systematic study of the microwave-induced oscillations in the magnetoresistance of a 2D electron gas for mixed disorder including both short-range and long-range components. The obtained photoconductivity tensor contains contributions of four distinct transport mechanisms. We show that the photoresponse depends crucially on the relative weight of the short-range component of disorder. Depending on the properties of disorder, the theory allows one to identify the temperature range... Compressibility of a 2D electron gas under microwave radiation by M. G. Vavilov; I. A. Dmitriev; I. L. Aleiner; A. D. Mirlin; D. G. Polyakov Microwave irradiation of a two-dimensional electron gas (2DEG) produces a non-equilibrium distribution of electrons, and leads to oscillations in the dissipative part of the conductivity. We show that the same non-equilibrium electron distribution induces strong oscillations in the 2DEG compressibility measured by local probes. Local measurements of the compressibility are expected to provide information about the domain structure of the zero resistance state of a 2DEG under microwave radiation. Theory of microwave-induced oscillations in the magnetoconductivity of a 2D electron gas by I. A. Dmitriev; M. G. Vavilov; I. L. Aleiner; A. D. Mirlin; D. G. Polyakov We develop a theory of magnetooscillations in the photoconductivity of a two-dimensional electron gas observed in recent experiments. The effect is governed by a change of the electron distribution function induced by the microwave radiation. We analyze a nonlinearity with respect to both the dc field and the microwave power, as well as the temperature dependence determined by the inelastic relaxation rate. Theory of the oscillatory photoconductivity of a 2D electron gas Magnetotransport of electrons in quantum Hall systems by I. A. Dmitriev; F. Evers; I. V. Gornyi; A. D. Mirlin; D. G. Polyakov; P. Wölfle Recent theoretical results on magnetotransport of electrons in a 2D system in the range of moderately strong transverse magnetic fields are reviewed. The phenomena discussed include: quasiclassical memory effects in systems with various types of disorder, transport in lateral superlattices, interaction-induced quantum magnetoresistance, quantum magnetooscillations in dc and ac transport, and oscillatory microwave photoconductivity. Tunneling into a Luttinger liquid revisited by D. N. Aristov; A. P. Dmitriev; I. V. Gornyi; V. Yu. Kachorovskii; D. G. Polyakov; P. Wölfle We study how electron-electron interactions renormalize tunneling into a Luttinger liquid beyond the lowest order of perturbation in the tunneling amplitude. 
We find that the conventional fixed point has a finite basin of attraction only in the point contact model, but a finite size of the contact makes it generically unstable to the tunneling-induced break up of the liquid into two independent parts. In the course of renormalization to the nonperturbative-in-tunneling fixed point, the...
CommonCrawl
TR20-120 | 12th August 2020 01:51
A Parallel Repetition Theorem for the GHZ Game
Authors: Justin Holmgren, Ran Raz
Publication: 12th August 2020 02:02
Keywords: Parallel Repetition

We prove that parallel repetition of the (3-player) GHZ game reduces the value of the game polynomially fast to 0. That is, the value of the GHZ game repeated in parallel $t$ times is at most $t^{-\Omega(1)}$. Previously, only a bound of $\approx \frac{1}{\alpha(t)}$, where $\alpha$ is the inverse Ackermann function, was known.

The GHZ game was recently identified by Dinur, Harsha, Venkat and Yuen as a multi-player game where all existing techniques for proving strong bounds on the value of the parallel repetition of the game fail. Indeed, to prove our result we use a completely new proof technique. Dinur, Harsha, Venkat and Yuen speculated that progress on bounding the value of the parallel repetition of the GHZ game may lead to further progress on the general question of parallel repetition of multi-player games. They suggested that the strong correlations present in the GHZ question distribution represent the ``hardest instance'' of the multi-player parallel repetition problem.

Another motivation for studying the parallel repetition of the GHZ game comes from the field of quantum information. The GHZ game, first introduced by Greenberger, Horne and Zeilinger, is a central game in the study of quantum entanglement and has been studied in numerous works. For example, it is used for testing quantum entanglement and for device-independent quantum cryptography. In such applications a game is typically repeated to reduce the probability of error, and hence bounds on the value of the parallel repetition of the game may be useful.
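As a concrete reference point for the statement above, the single (unrepeated) GHZ game has classical value 3/4, which can be checked by brute force over deterministic strategies. The sketch below assumes the standard GHZ rules, which the abstract does not restate: questions are drawn uniformly from the even-parity triples {000, 110, 101, 011}, each player answers with a bit, and the players win iff the XOR of the answers equals the OR of the questions.

```python
from itertools import product

# Questions of the standard GHZ game: uniformly random triples with even parity.
QUESTIONS = [(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)]

def value_of(strategies):
    """Winning probability of a triple of deterministic strategies.

    Each strategy is a table mapping the player's question bit to an answer bit.
    """
    wins = 0
    for x, y, z in QUESTIONS:
        a, b, c = strategies[0][x], strategies[1][y], strategies[2][z]
        if (a ^ b ^ c) == (x | y | z):
            wins += 1
    return wins / len(QUESTIONS)

# Each player has 4 deterministic strategies (functions {0,1} -> {0,1}).
tables = list(product((0, 1), repeat=2))  # table[q] is the answer to question q
best = max(value_of(s) for s in product(tables, repeat=3))
print(best)  # 0.75, the classical value of a single GHZ game
```

Parallel repetition asks how this value decays when $t$ independent copies are played simultaneously with one combined answer per player; the theorem above shows the decay is polynomial in $t$.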
CommonCrawl
Understanding row operations on a row with $0$'s for all entries except for the $\vec b$ solution vector entries

In practice reducing matrices to row echelon form, I've encountered scenarios in which I have a row like row 3 in the following matrix: $$ \begin{bmatrix} 1 & -3 & 3\\ 0 & 1 & 3 \\ 0 & 0 & 3\\ \end{bmatrix} $$ This matrix consists of input $x$ and $y$, with the third column as the solution vector. When trying to convert matrices to echelon form (which this is now in), I may sometimes have a row like the third. It is a funny thing, as when I try to convert the matrix to reduced echelon form and perform the row operation, say $$R2 \Rightarrow R2-R3$$ the matrix becomes $$ \begin{bmatrix} 1 & -3 & 3\\ 0 & 1 & 0 \\ 0 & 0 & 3\\ \end{bmatrix} $$ And then, for row 1 (to put zeros above the third pivot), $R1 \Rightarrow R1-R3$ gives $$ \begin{bmatrix} 1 & -3 & 0\\ 0 & 1 & 0 \\ 0 & 0 & 3\\ \end{bmatrix} $$ And then to scale the third pivot to $1$ I scale $R3$ to $(1/3)R3$. This feels like cheating, although I'm merely performing row operations. I literally could've just drawn zeros above the third pivot and changed the $3$ to a $1$ instead of going through all that hassle. In addition, I haven't a clue what that row would mean in terms of its row picture. $0x + 0y = 3$? That seems preposterous. Is arriving at such a row in my operations a mistake? And if not, what gives with this? To recap, my question is this: does the condition where the third row looks the way it does still allow the matrix to have a reduced echelon form, and does the fact that the matrix has no solution explain why the row operations that put zeros above the third pivot are so trivial?

linear-algebra gaussian-elimination

sangstar

If your calculations are correct, then the only interpretation is that the system doesn't have a solution, because there is no $x$ and $y$ that satisfy $0x+0y=3$ – Matheus Fernandes Sep 18 '17 at 19:42
This still makes it reducible, though, right? And how does that lend to explain how putting zeros above the third pivot is so trivial? – sangstar Sep 18 '17 at 19:43
What's the original problem? I'm not sure I understand what your question is, maybe edit your original post with the original problem. – Matheus Fernandes Sep 18 '17 at 19:47
@MatheusFernandes Sure, I edited it. – sangstar Sep 18 '17 at 19:52

I think I understand your question now. Let's tackle this problem piece by piece, ok? First of all, you have a system of 2 variables $x$ and $y$, but you have 3 equations in the system. Surely you remember that you only need two equations to reach a solution for two variables, so what would be the consequence of there being three equations, not two? One consequence would be that the third line is unnecessary to solve the problem, that it's a multiple of the other lines, a linear combination of the other two lines, like this: $$x+y=4$$ $$2x-3y=-7$$ $$3x-2y=-3$$ You see, adding the first two equations gives you the third equation. If you applied the Gauss-Jordan method to that system, you'd get the following matrix: $$ \begin{bmatrix} 1 & 0 & 1\\ 0 & 1 & 3\\ 0 & 0 & 0\\ \end{bmatrix} $$ Meaning, quite rightly, that $x=1$, $y=3$ and that $0x+0y=0$, which is not really that useful to us. Now, if the system only had two equations, not three, and in the end you had $0x+0y=0$, that could only mean that there are infinitely many solutions to this problem. You see, there would be an infinite number of combinations of $x$ and $y$ that fit that solution.
In the example I gave you, however, there is only one solution, and the extra line is just that: extra. In your problem, though, you seem to be confused about two things. One is that you could just "replace" the numbers above the 3 in the third row with zeros. That's not really what you're doing; you are actually multiplying the third row by $-1$ and adding it to the other rows. The thing is, the other numbers are 0, so they don't actually change anything, which I think is the source of your confusion. The second part is that pesky $0x+0y=3$. The matrix is indeed correct, that line is fine as it is, you don't have to change it. What it means here is something impossible: how can a number times 0 plus another number times 0 equal 3? The answer is, of course, that there are no numbers that can satisfy that equation, and the only conclusion is that the system has no solutions. That's the reason people use the Gauss-Jordan method, it lets you know if a system has no solutions, one solution, or an infinite number of solutions.

Matheus Fernandes

I see. So I've discovered the system has no solution. I still could turn it into reduced row echelon form, though right? It just seems to be very easy to create $0$'s above that pivot in the quaint third row. – sangstar Sep 19 '17 at 19:18
It's just what the method is, the only reason it's easy is because the other numbers are all $0$. Notice that that's what happens naturally whenever you use the Gauss-Jordan method, starting top to bottom, left to right, you never interfere with what you've already done, because of the zeroes. – Matheus Fernandes Sep 20 '17 at 6:44
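A quick way to check both cases discussed above is to compute reduced row echelon forms directly. This is a small sketch using SymPy (any computer algebra system would do); the matrices are the ones from the question and from the answer.

```python
from sympy import Matrix

# Augmented matrix from the question: x and y columns plus the solution column.
A = Matrix([[1, -3, 3],
            [0,  1, 3],
            [0,  0, 3]])
print(A.rref())
# (Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]), (0, 1, 2))
# A pivot in the last (augmented) column means one row reads 0x + 0y = 1:
# the system is inconsistent, yet the RREF itself exists and is perfectly valid.

# Augmented matrix of the consistent example from the answer.
B = Matrix([[1,  1,  4],
            [2, -3, -7],
            [3, -2, -3]])
print(B.rref())
# (Matrix([[1, 0, 1], [0, 1, 3], [0, 0, 0]]), (0, 1))
# Here the redundant third row reduces to all zeros, and x = 1, y = 3 can be read off.
```

So nothing went wrong: every matrix has a reduced row echelon form, and a pivot in the augmented column is simply the signal that the original system has no solution.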
CommonCrawl
A single-model quality assessment method for poor quality protein structure Jianquan Ouyang ORCID: orcid.org/0000-0002-7518-51561, Ningqiao Huang1 & Yunqi Jiang2 Quality assessment of protein tertiary structure prediction models, in which structures of the best quality are selected from decoys, is a major challenge in protein structure prediction, and is crucial to determine a model's utility and potential applications. Estimating the quality of a single model predicts the model's quality based on the single model itself. In general, the Pearson correlation value of the quality assessment method increases in tandem with an increase in the quality of the model pool. However, there is no consensus regarding the best method to select a few good models from the poor quality model pool. We introduce a novel single-model quality assessment method for poor quality models that uses simple linear combinations of six features. We perform weighted search and linear regression on a large dataset of models from the 12th Critical Assessment of Protein Structure Prediction (CASP12) and benchmark the results on CASP13 models. We demonstrate that our method achieves outstanding performance on poor quality models. According to results of poor protein structure assessment based on six features, contact prediction and relying on fewer prediction features can improve selection accuracy. Proteins are large, important biological molecules. Direct prediction of a protein's tertiary structure based on amino acid sequence is a challenging problem that has a significant impact on modern biology and medicine. The results of such predictions play key roles in understanding of protein function, design of proteins for new biological functions, and research and development of new drugs. With the completion of the Human Genome Project, more proteins' amino acid sequences have been analysed by genome-sequencing technologies. Although the number of known protein amino acid sequences is increasing rapidly, the number of experimentally determined structures has lagged far behind the speed of amino acid analysis [1]. Meanwhile, scientific researchers have continued exploring and practicing. The main experimental methods are currently X-ray crystallography [2], NMR (Nuclear Magnetic Resonance), and Cryo-EM [3]. These existing methods often require much time and expensive resources, which prevents the speed of experimental protein structure determination from keeping up with the explosive growth in the number of available sequences [4]. One major challenge in structure prediction is selection of the best model from a pool of generated models. Protein structure prediction applications such as Rosetta [5,6,7,8] generate a large number of models of highly variable quality, but it is difficult to predict which is the closest to the native structure. There are currently two main types of model ranking. The first type is consensus methods, which calculate the model's average similarity score against that of other models and usually assume that the pool's higher-quality models have higher similarity with the other models in the pool. Consensus methods can usually achieve better performance, but experiments have shown that to realize this advantage, a large number of structural models of the same protein are required. The time complexity of consensus methods is O(n^2). If the number of structural models of the same kind of protein target is very large, the time and resources required for its calculation also become huge. 
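To make the cost argument concrete, a consensus (pairwise-similarity) score looks roughly like the sketch below. The pairwise_similarity function is a placeholder for whatever structural similarity measure is used between two models (for example GDT-TS or TM-score); it is an assumption for illustration, not something defined in the text, and the nested loop is what produces the O(n^2) behaviour discussed above.

```python
def consensus_scores(models, pairwise_similarity):
    """Score each model by its average similarity to every other model in the pool.

    `models` is a list of decoys; `pairwise_similarity(a, b)` is an assumed
    callable returning a structural similarity in [0, 1].  The double loop is
    O(n^2) in the pool size, which is why consensus methods become expensive
    for large pools and meaningless for pools of only one or two models.
    """
    n = len(models)
    scores = []
    for i, a in enumerate(models):
        total = sum(pairwise_similarity(a, b) for j, b in enumerate(models) if j != i)
        scores.append(total / (n - 1) if n > 1 else 0.0)
    return scores
```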
In addition, these methods do not work when evaluating a single or few structural models of the same kind of protein target. The other type of ranking method is single-model quality assessment, which does not need to rely on other structural models. Currently, most single-model quality assessment methods use physics-based knowledge and evolutionary information from the sequences, together with features extracted in various formats by other tools. For example, ProQ3 [9] extracts several features from sequence and energy terms produced by Rosetta in a format that can serve as input to a Support Vector Machine model. Another single-model quality assessment method, DeepQA [10], applies a belief network to 16 features. It relies on features from other tools, so when those tools have errors, increasing dependency on them does not improve accuracy. In summary, the existing single-model assessment methods use dozens of features obtained from other tools, and the more inaccurate features may cause the prediction results to be less credible. In addition, most model quality assessment methods and energy scoring functions perform well when model quality is better, but they often have bad performance on models of poor quality. Therefore, we hope to explore a single-model quality assessment method that uses a few features as input to choose the best quality examples possible from the poor quality model set, which is the original motivation for this work. In this paper, we propose a novel method for assessment of poor quality models that combines physics-based knowledge, tertiary structure properties, and physical properties derived from amino acid sequences. The total score is obtained by simple linear combinations rather than complex models such as machine learning. After performing a linear fit and weight random search on a large amount of data, ab initio models were generated using Rosetta and poor quality models from CASP13 were used for benchmarking. Our method shows outstanding performance on low-quality models.

Performance comparison on FM domains

Parts of proteins with known tertiary structures have been obtained by experimental methods. In most cases, homologous proteins have similar structures. Homology modelling is an effective method if a template with high homology similarity can be found [11]. However, not all targets have similar sequences in the Protein Data Bank, so the advantage of using a template is not applicable in all cases. In such situations, we usually use ab initio modelling methods; such predictions are difficult, and the quality of their results is generally poor without any constraints. Therefore, this type of model is our ideal evaluation target. We evaluated our method on the CASP13 dataset and compared it with other open-source methods as well as with individual features of our method. We divided evaluation of the FM domains dataset into two stages. The first stage was the generation of 1000 decoys for each target using Rosetta ab initio. This simulated our method's model-selection ability on realistic low-quality prediction results. We used the standard evaluation metrics: average per-target correlation and average per-target quality based on GDT-TS. In addition, to compare the present method's performance with that of other methods, we selected the top 5 models for each method. Then, the top 1 model and the best of the top 5 models were evaluated separately (Table 1).
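The evaluation metrics just described can be written down compactly. The sketch below is illustrative only (the array names are assumptions): for one target it computes the per-target Pearson correlation against GDT-TS and the quality of the top 1 and the best of the top 5 models selected by predicted score. It assumes a larger predicted score means a better model; for a penalty-style score such as the one defined later in the Methods, the negated score would be passed instead.

```python
import numpy as np

def per_target_metrics(predicted_scores, true_gdtts, top_k=5):
    """Pearson correlation plus top-1 and best-of-top-k GDT-TS for one target."""
    predicted_scores = np.asarray(predicted_scores, dtype=float)
    true_gdtts = np.asarray(true_gdtts, dtype=float)
    # Pearson correlation between predicted quality and true GDT-TS.
    r = np.corrcoef(predicted_scores, true_gdtts)[0, 1]
    # Rank decoys by predicted score, best first (higher score = better here).
    order = np.argsort(predicted_scores)[::-1]
    top1_quality = true_gdtts[order[0]]
    best_of_top_k = true_gdtts[order[:top_k]].max()
    return r, top1_quality, best_of_top_k
```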
In cases where the quality of the evaluation model was poor, all evaluation metrics were better under our method than under the other investigated methods. Our model's performance was significantly better than that of other methods, especially in terms of the correlation and Z-score sum of the top 1 model. In addition, we used two typical machine learning models with the same input features for evaluation. Although the machine learning models used the same features as input, their performance was still worse than that of the linear combination. Table 1 shows that the effects of the two weighting methods (Random Search and Linear Regression) are very similar. Therefore, in future research, we will only use search weights as comparison results.

Table 1 Comparison results of various QA methods from Stage 1

To evaluate the prediction models produced by various methods, we used the low-quality models submitted by the CASP13 teams for Stage 2. Considering that the models submitted by some teams were not complete, some of the PDB files had amino acid residues missing. Therefore, the L value of the contact penalty term and the N value of the binary classification penalty term were determined on the basis of the amino acid residue sequences parsed from the PDB files. As shown in Table 2, the TM-score was also used as one of the evaluation metrics. In terms of the average Pearson correlation and the results for the top 1 model and top 5 models, our method still performs better than other methods.

Table 2 Performance of various QA methods measured by GDT-TS and TM-scores (CASP13 FM domains, poor quality dataset)

Performance comparison on TBM-hard domains

Generally, target domains with good server predictions have close template homologs and are classified as TBM. To evaluate the accuracy of our method on TBM domains, we benchmarked our method's performance on TBM-hard domains from CASP13 models in which GDT-TS is not greater than 50. As shown in Table 3, our method achieves good performance in terms of correlations with both GDT-TS and TM-scores. Although the average per-target quality of ProQ3D on the top 1 match was better than ours, the different quality distributions of the ProQ3D models under different domains give our method an advantage in terms of Z-score sum. Our method's per-target average quality on the top 5 models based on TM-score was better than that of all other QA methods.

Table 3 Performance of various QA methods measured by GDT-TS and TM-scores (CASP13 TBM-hard domains, poor quality dataset)
Many current quality assessment methods use deep learning. Because the prediction results of other tools are often used as feature inputs, the use of too many features may not improve accuracy. Moreover, most tools have bad prediction performance in terms of correlations with poor quality models (e.g. Table 1). ML models training using inaccurate feature input may not generate good results, so we use as few features and as simple of a model as possible to achieve the best results. Nevertheless, when the predictions of the tools that our method relies on have significant errors, the effect is still not satisfactory. In future work, to avoid inaccurate tool predictions, we will consider intercepting contact information from aligned homologous protein structures and rely on physical and chemical features that can be directly extracted from the protein structure as much as possible. In this paper, we have developed a single-model quality assessment method of protein tertiary structure prediction for poor quality models. It ranks models using a linear combination of six features. This method achieves good performance on CASP13 submission models and ab initio models generated by Rosetta. We believe that contact prediction is important information and that using it appropriately could further improve performance on poor quality models. Input features To reduce model complexity and improve accuracy on poor quality models, we selected six features as inputs for our method. These features statistically describe the potential, physio-chemical, and structural attributes of a protein model. DOPE (Discrete Optimized Protein Energy) [13], a representative statistical potential scoring method, obtains relevant statistics about atomic distances from known native protein structures. It also assists researchers with quality assessment, supported by probability theory. GOAP (Generalized Orientation-dependent All-atom Potential) [14] is another scoring method based on atomic distance and angle-related statistical knowledge, which can supplement DOPE. Secondary structure information is essential to describe the physio-chemical properties of proteins. Although the number of possible conformations of a polypeptide chain is quite large, the conformational space would be vastly reduced if the secondary structure can be determined. At present, the best secondary structure prediction methods can reach accuracy rates of more than 80%. In this study, we use PSSpredv4 [15, 16] to predict secondary structure and use it as a benchmark to evaluate structure. Another important physical property of proteins is solvent accessibility. Here, we use SSPro4 [17, 18] to predict the solvent accessibility of the target protein and then compare the result with the estimated structure to impart a scoring penalty. Finally, accurate contact predictions can help us to limit the number of conformational searches at atomic distance. In recent years, with the development of deep learning, prediction of protein contact has become more accurate. As a representative protein contact prediction method, we use RaptorX [19,20,21,22] to calculate the penalty term for contact. Table 4 is a summary table of all features. 
The quality assessment method consists of six parts, and the total score is obtained by weighting the sum of each part:

$$ E = \omega_1 E_{dope} + \omega_2 E_{goap} + \omega_3 E_{ss\_h} + \omega_4 E_{ss\_e} + \omega_5 E_{sa} + \omega_6 E_{contact} \quad (1) $$

Table 4 Six features of our method

The Ess prefix indicates the penalty term of the secondary structure, and there is a subdivision between the α-helix and β-strand penalty terms, indicated by Ess_h and Ess_e. The solvent accessibility penalty term is Esa. We predict this term by ACCPro4, which returns a sequence result for each residue in binary classification format, indicating each residue's solubility. With a solvent accessibility threshold of 25%, each residue is normalized to the maximum exposed surface area calculated using DSSP [23]. Econtact indicates the contact penalty. After the contact prediction results are sorted by confidence level, the top L residue pairs are taken, where L is the length of the amino acid sequence. We then find the corresponding residue pair in the structure and calculate the distance between the residues' Cβ atoms (Cα for GLY). If the actual distance is no more than 8 Å, the position is consistent with the predicted structure. DOPE and GOAP use the number of amino acid residues for normalization. In the above binary classification penalty items, the penalty score is:

$$ \frac{\sum_{i=1}^{N}\left(x_i \oplus p_i\right)}{N} \quad (2) $$

Here, $x_i$ is the parsing result of each amino acid residue, and $p_i$ is the corresponding prediction. The penalty term calculates the XOR value between the prediction of the corresponding amino acid residue position and the parsing result. Occasionally, if a feature cannot be calculated because of a tool failure, its value is set to 0.5.
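A compact sketch of how the pieces above combine is given below. The contact list, the per-residue secondary-structure and accessibility predictions, and the DOPE/GOAP values are all assumed to come from the external tools named earlier; only the arithmetic of Eqs. (1) and (2) and the 8 Å contact check is shown, and the helper names are illustrative rather than part of any published interface. The text defines the contact term only implicitly, so the sketch treats it as the fraction of top-L predicted contacts that are not realised in the model, consistent with its role as a penalty.

```python
import numpy as np

def binary_penalty(parsed, predicted):
    """Eq. (2): mean XOR between per-residue labels parsed from the model and predicted labels."""
    parsed = np.asarray(parsed, dtype=int)
    predicted = np.asarray(predicted, dtype=int)
    return np.mean(parsed ^ predicted)

def contact_penalty(top_l_pairs, cb_coords, cutoff=8.0):
    """Fraction of the top-L predicted contacts NOT realised within `cutoff` angstroms.

    `top_l_pairs` are residue index pairs sorted by predicted confidence;
    `cb_coords` is an (N, 3) array of C-beta coordinates (C-alpha for GLY).
    """
    cb_coords = np.asarray(cb_coords, dtype=float)
    missed = sum(1 for i, j in top_l_pairs
                 if np.linalg.norm(cb_coords[i] - cb_coords[j]) > cutoff)
    return missed / len(top_l_pairs)

def total_score(features, weights):
    """Eq. (1): weighted sum of the six per-model penalty terms."""
    order = ("dope", "goap", "ss_h", "ss_e", "sa", "contact")
    return sum(weights[name] * features[name] for name in order)

# Final weights reported in the "Dataset and weights" section below.
WEIGHTS = {"dope": 4.96, "goap": 0.4, "sa": 70.65,
           "ss_e": 65.5, "ss_h": 76.5, "contact": 168.16}
```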
Dataset and weights

We collected CASP12 models from the CASP website http://predictioncenter.org/download_area/CASP12/predictions. We also downloaded the corresponding score table from the CASP website. The official CASP uses GDT-TS [24] (based on the LGA [25] package) as an important evaluation criterion. We extracted GDT-TS from the score table as the quality score. After excluding targets without the score table file, all remaining data models contain 75 whole-chain targets, and the average target has 400 decoys. In total, we used 28,879 structural decoys covering a wide range of qualities to search weights. To evaluate the performance of our method, we used 18,717 models in which GDT-TS was not greater than 50; they are from the CASP13 human groups' 31 FM (Free modelling) domains and 18 TBM (Template-based modelling)-hard domains (because of the limitation of the RaptorX server on the length of the amino acid sequence, we discard T0985-D1). In addition, we used Rosetta ab initio to generate 1000 decoys for each FM domain from CASP13. For discontinuous domains, such as 77–134 and 348–520, we took the minimum and maximum of the interval to obtain 77–520. The GDT-TS score range for these data is shown in Fig. 1. Most of the generated decoys had GDT-TS scores below 50, and the lowest score was between 10 and 20. The best decoy quality in T0990-D1 was 64.8. The dataset can be downloaded from Google Drive, https://drive.google.com/file/d/12YscAvt1ZQCupdKopW8k9c1VUZu6NR03.

Fig. 1 GDT-TS range of ab initio predictions made using Rosetta for targets in 31 FM (Free Modelling) domains of CASP13. The range of the generated decoys for each target is represented by a box plot, with the minimum, lower quartile, median, upper quartile, and maximum values in turn from left to right

We developed two methods to obtain the weight values: one is to use linear regression, and the other is to perform a random search on the weights to obtain the maximum total correlation. We used 75 targets from the CASP12 dataset together to search for the optimized weights. The correlation matrix of each of our scores with the CASP12 data shows that the contact score had the best correlation, and the secondary structure folding score had the worst (Fig. 2). We calculated the MSE for GDT-TS using a linear fit, obtaining values of 1.50, 1.79, 2.57, 2.54, 1.35, and 4.17 for DOPE, GOAP, SA, SS_E, SS_H, and Contact, respectively. Because our goal was to rank the structural models, we did not need the intercept value obtained from the linear regression. The goal of the weight random search is to optimize the sum of the correlation coefficients of all targets. Considering that the weights have a different impact on each target's relevance, we used a random search method. The starting weights of the six features are float-type variables in the range [0,1], and then random weight searches are performed to maximize the sum of the target correlation coefficients.

Fig. 2 Correlation matrix of our method's features obtained using CASP12 data, with the true scores corresponding to the ground truth GDT-TS score

The random search was performed in 3 rounds, with each round lasting 10,000 steps. After each round of searching, we extracted the top 100 weight combinations and used their maximum and minimum values for each weight as the range for the next round of searching. If a weight's search value is focused on the range boundary, the next round of search expands or reduces the weight's range by 10 times. After three rounds of weight search, the sum of the correlation coefficients had risen to 54.39. We extracted the top 50 weight combinations in terms of correlation coefficients after the third round of searching (Fig. 3). The contact penalty term was given a larger weight, and the resulting distribution was more concentrated. A small weight was assigned to GOAP to reduce its impact on the results. We assigned the average value of each weight from the top 50 weight combinations as the final result. The final values were 4.96, 0.4, 70.65, 65.5, 76.5, and 168.16 for DOPE, GOAP, SA, SS_E, SS_H, and Contact, respectively.

Fig. 3 Ranges of the top 50 weight combinations obtained using CASP12 data and random search. The weight distribution range of each feature uses the box plot of the corresponding colour of the legend to represent the minimum, lower quartile, median, upper quartile, and maximum value

All data used in this study are available from the corresponding author upon request.

References

Ovchinnikov S, Park H, Varghese N, Huang P-S, Pavlopoulos GA, Kim DE, et al. Protein structure determination using metagenome sequence data. Science. 2017;355(6322):294–8. Ayyer K, Yefanov OM, Oberthür D, Roy-Chowdhury S, Galli L, Mariani V, et al. Macromolecular diffractive imaging using imperfect crystals. Nature. 2016;530(7589):202–6. Bai X-C, McMullan G, Scheres SH. How cryo-EM is revolutionizing structural biology. Trends Biochem Sci. 2015;40(1):49–57. Marks DS, Hopf TA, Sander C. Protein structure prediction from sequence variation. Nat Biotechnol. 2012;30(11):1072. Simons KT, Strauss C, Baker D. Prospects for ab initio protein structural genomics. J Mol Biol.
2001;306(5):1191–9. Das R, Baker D. Macromolecular modeling with rosetta. Annu Rev Biochem. 2008;77:363–82. Baker D, Sali A. Protein structure prediction and structural genomics. Science. 2001;294(5540):93–6. Bradley P, Malmström L, Qian B, Schonbrun J, Chivian D, Kim DE, et al. Free modeling with Rosetta in CASP6. Proteins: Struct, Funct, Bioinformatics. 2005;61(S7):128–34. Uziela K, Shu N, Wallner B, Elofsson A. ProQ3: improved model quality assessments using Rosetta energy terms. Sci Rep. 2016;6(1):1–10. Cao R, Bhattacharya D, Hou J, Cheng J. DeepQA: improving the estimation of single protein model quality with deep belief networks. BMC bioinformatics. 2016;17(1):495. Kryshtafovych A, Monastyrskyy B, Fidelis K, Moult J, Schwede T, Tramontano A. Evaluation of the template-based modeling in CASP12. Proteins: Struct, Funct, Bioinformatics. 2018;86:321–34. Yang J, Anishchenko I, Park H, Peng Z, Ovchinnikov S, Baker D. Improved protein structure prediction using predicted interresidue orientations. Proc Natl Acad Sci. 2020. Shen M, Sali A. Statistical potential for assessment and prediction of protein structures. Protein Sci. 2006;15(11):2507–24. Zhou H, Skolnick J. GOAP: a generalized orientation-dependent, all-atom statistical potential for protein structure prediction. Biophys J. 2011;101(8):2043–52. Yan R, Xu D, Yang J, Walker S, Zhang Y. A comparative assessment and analysis of 20 representative sequence alignment methods for protein structure prediction. Sci Rep. 2013;3(1):1–9. Yang J, Yan R, Roy A, Xu D, Poisson J, Zhang Y. The I-TASSER suite: protein structure and function prediction. Nat Methods. 2015;12(1):7. Cheng J, Randall AZ, Sweredoski MJ, Baldi P. SCRATCH: a protein structure and structural feature prediction server. Nucleic Acid Res. 2005;33(suppl_2):W72–W6. Pollastri G, Baldi P, Fariselli P, Casadio R. Prediction of coordination number and relative solvent accessibility in proteins. Proteins: Struct, Funct, Bioinformatics. 2002;47(2):142–53. Wang S, Sun S, Li Z, Zhang R, Xu J. Accurate de novo prediction of protein contact map by ultra-deep learning model. PLoS Comput Biol. 2017;13(1):e1005324. Wang S, Li Z, Yu Y, Xu J. Folding membrane proteins by deep transfer learning. Cell Syst. 2017;5(3):202–11. e3. Wang S, Sun S, Xu J. Analysis of deep learning methods for blind protein contact prediction in CASP12. Proteins: Struct, Funct, Bioinormatics. 2018;86:67–77. Wang S, Li W, Zhang R, Liu S, Xu J. CoinFold: a web server for protein contact prediction and contact-assisted protein folding. Nucleic Acids Res. 2016;44(W1):W361–W6. Li J, Cao R, Cheng J. A large-scale conformation sampling and evaluation server for protein tertiary structure prediction and its assessment in CASP11. BMC bioinformatics. 2015;16(1):337. Zhang Y, Skolnick J. Scoring function for automated assessment of protein structure template quality. Proteins: Struct, Funct, Bioinformatics. 2004;57(4):702–10. Zemla A. LGA: a method for finding 3D similarities in protein structures. Nucleic Acids Res. 2003;31(13):3370–4. This research has been supported by Key Projects of the Ministry of Science and Technology of the People's Republic of China (2018AAA0102301). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. 
Key Laboratory of Intelligent Computing & Information Processing, Ministry of Education, Xiangtan University, Xiangtan, China: Jianquan Ouyang & Ningqiao Huang
College of Chemistry, Xiangtan University, Xiangtan, China: Yunqi Jiang
JQOY designed and directed the project; YQJ created the dataset; NQH analysed the data; JQOY and NQH wrote the manuscript. All authors read and approved the final manuscript.
Correspondence to Jianquan Ouyang.
Ouyang, J., Huang, N. & Jiang, Y. A single-model quality assessment method for poor quality protein structure. BMC Bioinformatics 21, 157 (2020). https://doi.org/10.1186/s12859-020-3499-5
Keywords: Protein structure ranking; Protein model quality assessment; Poor-quality protein structure; Linear combination
November 2017, 11(4): 813-835. doi: 10.3934/amc.2017060

Capacity of random channels with large alphabets

Tobias Sutter 1, David Sutter 2 and John Lygeros 1
1 Automatic Control Laboratory, ETH Zurich, Switzerland
2 Institute for Theoretical Physics, ETH Zurich, Switzerland

Received November 2016; Published November 2017.
Fund Project: TS and JL were supported by the ETH grant (ETH-15 12-2). DS acknowledges support by the Swiss National Science Foundation (SNSF) via the National Centre of Competence in Research QSIT and by the Air Force Office of Scientific Research (AFOSR) via grant FA9550-16-1-0245.

We consider discrete memoryless channels with input alphabet size $n$ and output alphabet size $m$, where $m=\lceil \gamma n\rceil$ for some constant $\gamma>0$. The channel transition matrix consists of entries that, before being normalized, are independent and identically distributed nonnegative random variables $V$ such that $\mathbb{E}[(V \log V)^2]<\infty$. We prove that in the limit as $n\to\infty$ the capacity of such a channel converges to $\mathrm{Ent}(V) / \mathbb{E}[V]$ almost surely and in $\mathrm{L}^{2}$, where $\mathrm{Ent}(V):= \mathbb{E}[V\log V]-\mathbb{E}[V]\log \mathbb{E}[V]$ denotes the entropy of $V$. We further show that, under slightly different model assumptions, the capacity of these random channels converges to this asymptotic value exponentially in $n$. Finally, we present an application in the context of Bayesian optimal experiment design.

Keywords: Channel capacity, duality of convex programming, random channel, concentration of measure, Bayesian experiment design.
Mathematics Subject Classification: Primary: 94A15, 94A17; Secondary: 62B10.
Citation: Tobias Sutter, David Sutter, John Lygeros. Capacity of random channels with large alphabets. Advances in Mathematics of Communications, 2017, 11 (4): 813-835. doi: 10.3934/amc.2017060

[1] S. Arimoto, An algorithm for computing the capacity of arbitrary discrete memoryless channels, IEEE Transactions on Information Theory, 18 (1972), 14-20.
[2] D. P. Bertsekas, Convex Optimization Theory, Athena Scientific optimization and computation series, Athena Scientific, 2009.
[3] E. Biglieri, J. Proakis and S. Shamai, Fading channels: Information-theoretic and communications aspects, IEEE Transactions on Information Theory, 44 (1998), 2619-2692. doi: 10.1109/18.720551.
[4] R. E. Blahut, Computation of channel capacity and rate-distortion functions, IEEE Transactions on Information Theory, 18 (1972), 460-473.
[5] S. Boucheron, G. Lugosi and P. Massart, Concentration Inequalities: A Nonasymptotic Theory of Independence, Oxford University Press, Oxford, 2013. URL http://dx.doi.org/10.1093/acprof:oso/9780199535255.001.0001.
[6] A. G. Busetto, A. Hauser, G. Krummenacher, M. Sunnåker, S. Dimopoulos, C. S. Ong, J. Stelling and J. M. Buhmann, Near-optimal experimental design for model selection in systems biology, Bioinformatics, 29 (2013), 2625-2632. doi: 10.1093/bioinformatics/btt436. URL http://dblp.uni-trier.de/db/journals/bioinformatics/bioinformatics29.html#BusettoHKSDOSB13.
[7] M. Chiang, Geometric programming for communication systems, Foundations and Trends in Communications and Information Theory, 2 (2005), 1-154. doi: 10.1561/0100000005.
[8] M. Chiang and S. Boyd, Geometric programming duals of channel capacity and rate distortion, IEEE Transactions on Information Theory, 50 (2004), 245-258. doi: 10.1109/TIT.2003.822581.
[9] T. M. Cover and J. A. Thomas, Elements of Information Theory, Wiley Interscience, 2006.
[10] L. Devroye, Nonuniform Random Variate Generation, Springer-Verlag, New York, 1986. URL http://dx.doi.org/10.1007/978-1-4613-8643-8.
[11] R. Durrett, Probability: Theory and Examples, Cambridge University Press, 2010.
[12] M. B. Hastings, Superadditivity of communication capacity using entangled inputs, Nature Physics, 5 (2009), 255-257. doi: 10.1038/nphys1224.
[13] A. S. Holevo, Quantum Systems, Channels, Information, De Gruyter Studies in Mathematical Physics 16, 2012.
[14] J. Huang and S. P. Meyn, Characterization and computation of optimal distributions for channel coding, IEEE Transactions on Information Theory, 51 (2005), 2336-2351. doi: 10.1109/TIT.2005.850108.
[15] D. V. Lindley, On a measure of the information provided by an experiment, Ann. Math. Statist., 27 (1956), 986-1005. doi: 10.1214/aoms/1177728069.
[16] M. Raginsky and I. Sason, Concentration of measure inequalities in information theory, communications, and coding, Foundations and Trends in Communications and Information Theory, 10 (2013), 1-246. doi: 10.1561/0100000064.
[17] R. T. Rockafellar, Convex Analysis, Princeton Landmarks in Mathematics and Physics Series, Princeton University Press, 1997.
[18] W. Rudin, Principles of Mathematical Analysis, 3rd edition, International Series in Pure and Applied Mathematics, McGraw-Hill Book Co., New York-Auckland-Düsseldorf, 1976.
[19] C. E. Shannon, A mathematical theory of communication, Bell System Technical Journal, 27 (1948), 379-423, 623-656. doi: 10.1002/j.1538-7305.1948.tb01338.x. URL http://math.harvard.edu/~ctm/home/text/others/shannon/entropy/entropy.pdf.
[20] T. Sutter, D. Sutter, P. Mohajerin Esfahani and J. Lygeros, Efficient approximation of channel capacities, IEEE Transactions on Information Theory, 61 (2015), 1649-1666. doi: 10.1109/TIT.2015.2401002.
[21] A. M. Tulino and S. Verdú, Random matrix theory and wireless communications, Foundations and Trends in Communications and Information Theory, 1 (2004), 1-182. doi: 10.1561/0100000001.

Figure 1. For different alphabet sizes $n$ we plot the capacity of five random channels, constructed as explained in Example 3.1. The method introduced in [20] is used to determine upper and lower bounds for the capacity for finite alphabet sizes $n$. The asymptotic capacity (for $n\to \infty$) is depicted by the dashed line.
Figure 2. For different alphabet sizes $n$, we plot in (a) the empirical mean of the maximum expected information gain (blue line) $\tfrac{1}{N} \sum_{i=1}^N \sup_{\lambda\in\Lambda} I(p, \mathsf{W}_i^{(\lambda, V, n)})$, where $(\mathsf{W}_i^{(\lambda, V, n)})_{i=1}^N$ are independent channels and $N=1000$. The red line represents the empirical mean of the suboptimal expected information gain, that is given by $\tfrac{1}{N} \sum_{i=1}^N I(p, \mathsf{W}_i^{(\hat{\lambda}, V, n)})$, where $\hat{\lambda}$ are the optimal parameters for the asymptotic capacity, derived in Proposition 4.3.
(b) depicts the empirical variance of the maximum expected information gain (blue line) as well as the empirical variance of the suboptimal expected information gain (red line).
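The convergence claimed in the abstract, namely that the capacity approaches $\mathrm{Ent}(V)/\mathbb{E}[V]$ as the alphabets grow, can be checked numerically on small instances. The sketch below is not the authors' method: it draws a random transition matrix with i.i.d. entries (here exponential with mean 1, an assumption chosen only because its moments are easy to compute), estimates its capacity with the classical Blahut-Arimoto iteration, and compares the result with the asymptotic value. Natural logarithms are used throughout, so capacities are in nats.

```python
import numpy as np

def blahut_arimoto(W, iters=500):
    """Capacity (in nats) of a discrete memoryless channel with row-stochastic
    transition matrix W[x, y] = probability of output y given input x."""
    n = W.shape[0]
    p = np.full(n, 1.0 / n)                      # input distribution, start uniform
    for _ in range(iters):
        q = p @ W                                # induced output distribution
        d = np.sum(W * (np.log(W) - np.log(q)), axis=1)   # D(W(.|x) || q) per input
        p = p * np.exp(d)
        p /= p.sum()
    q = p @ W
    d = np.sum(W * (np.log(W) - np.log(q)), axis=1)
    return float(p @ d)                          # mutual information at the final p

rng = np.random.default_rng(1)
gamma = 1.0                                      # output alphabet size m = ceil(gamma * n)
for n in (50, 200, 800):
    m = int(np.ceil(gamma * n))
    V = rng.exponential(scale=1.0, size=(n, m))  # i.i.d. nonnegative entries, V ~ Exp(1)
    W = V / V.sum(axis=1, keepdims=True)         # normalize each row to a distribution
    # For V ~ Exp(1): E[V] = 1 and E[V log V] = 1 - Euler-Mascheroni constant, so
    # Ent(V)/E[V] = E[V log V] - E[V] log E[V] = 1 - 0.5772... ~ 0.4228 nats.
    print(n, round(blahut_arimoto(W), 4), "vs", round(1 - np.euler_gamma, 4))
```

As n grows, the Blahut-Arimoto estimate should approach the printed asymptote, matching the dashed line described in the Figure 1 caption.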
Can selection for resistance to OsHV-1 infection modify susceptibility to Vibrio aestuarianus infection in Crassostrea gigas? First insights from experimental challenges using primary and successive exposures Patrick Azéma1, Julien De Lorgeril2, Delphine Tourbiez1 & Lionel Dégremont1 Since 2008, the emergent virus OsHV-1µvar has provoked massive mortality events in Crassostrea gigas spat and juveniles in France. Since 2012, mortality driven by the pathogenic bacteria Vibrio aestuarianus has stricken market-sized adults. A hypothesis to explain the sudden increase in mortality observed in France since 2012 is that selective pressure due to recurrent viral infections could have led to a higher susceptibility of adults to Vibrio infection. In our study, two OsHV-1-resistant lines (AS and BS) and their respective controls (AC and BC) were experimentally challenged in the laboratory to determine their level of susceptibility to V. aestuarianus infection. At the juvenile stage, the selected lines exhibited lower mortality (14 and 33%) than the control lines (71 and 80%), suggesting dual-resistance to OsHV-1 and V. aestuarianus in C. gigas. Interestingly, this pattern was not observed at the adult stage, where higher mortality was detected for AS (68%) and BC (62%) than AC (39%) and BS (49%). These results were confirmed by the analysis of the expression of 31 immune-related genes in unchallenged oysters. Differential gene expression discriminated oysters according to their susceptibility to infection at both the juvenile and adult stages, suggesting that resistance to V. aestuarianus infection resulted in complex interactions between the genotype, stage of development and immunity status. Finally, survivors of the V. aestuarianus challenge at the juvenile stage still exhibited significant mortality at the adult stage during a second and third V. aestuarianus challenge, indicating that these survivors were not genetically resistant. The French oyster industry has regularly suffered from massive mortality episodes (Figure 1). In the early 1970s, the production of the Portuguese oyster Crassostrea angulata collapsed due to massive mortality related to an iridovirus [1], and the production of the flat oyster Ostrea edulis was significantly reduced due to two parasites (Martelia refringens and Bonamia ostreae) [2]. Once a disease affecting an oyster species has been introduced or detected in an area, resources to minimize its effect on oyster populations or production are very constrained. Neither large-scale drug use nor vaccination (because invertebrates have no acquired immunity) is possible in open marine areas due to the scale of the environment, and seawater or other organisms can easily convey pathogens from the reservoir to naïve stocks, thereby favoring the transmission of a disease. In this context, two main solutions have been proposed to sustain French oyster production: (1) develop a selective breeding program to enhance disease resistance using the genetic resources available in oyster populations, and more radically (2) introduce another species that is not sensitive to the disease. This second step was taken in France during the 1970s with the massive introduction of Crassostrea gigas from Japan and British Columbia to replace C. angulata during the RESOR operation (Figure 1) [3]. 
However, the introduction of new species is not recommended because it can lead to the introduction of new diseases in local populations [4], competition for habitats and resources, new invasive species and other constraints (regulatory rules, preliminary studies and biological barriers). French oyster production of C. angulata, C. gigas, and O. edulis since 1950. The main diseases that affected the production are indicated with red stars. French oyster production of C. gigas has ranged from 100 000 to 150 000 tons for several decades, but has unfortunately begun to decrease due to two diseases (Figure 1). Indeed, massive mortality events have occurred every year since 2008, with high mortality rates for spat and juveniles (over 70%). A particular OsHV-1 genotype (µvar) that was first described during a period of C. gigas mortality in 2004–2005 in Normandy [5] has been ascribed to the mortality [6]. Moreover, significant mortality has been observed in market-sized adults since 2012 [7–9], and C. gigas production is expected to decrease again (Figure 1). The main pathogenic agent found in the dying oysters harvested during these mortality episodes belonged to the species Vibrio aestuarianus. Because it would not be reasonable to introduce another oyster species to replace C. gigas, the possibility of genetic selection for disease resistance might limit the impact of diseases on wild and cultivated oysters. High mortality rates related to OsHV-1µvar have been observed in the field since 2008 [10], and it is probable that viral pressure on wild and cultivated oyster populations has been significant. Therefore, the emergence of high mortality in adults has made it legitimate to investigate whether the selection provoked by viral infection has an impact on the susceptibility of adults to bacterial infections and whether there are correlations or trade-offs between resistance to OsHV-1 and the expected resistance to V. aestuarianus. Consequently, it would be interesting to study whether the mass selection breeding program for C. gigas currently being managed at Ifremer [11] could enhance disease resistance to both OsHV-1 and V. aestuarianus. Evidence of OsHV-1 resistance was demonstrated in spat C. gigas in 2009 using oysters selected based on their higher resistance to the summer mortality phenomenon in 2001 [12–14]. More recently, OsHV-1 resistance was found to be a highly heritable trait in C. gigas spat under field and laboratory conditions [11, 15]. However, experimental selective breeding programs focused on V. aestuarianus resistance have not been described to date, and a relationship between resistance to OsHV-1 infection and V. aestuarianus infection has not been reported. One hypothesis to explain disease resistance could be linked to host defenses, such as the immune capacity. Previous works identified markers for oyster survival capacity [16–19]. These studies led to the identification of a set of genes whose expression was either up-regulated in oysters able to survive virulent Vibrio infection [16, 20] or differentially regulated in the hemocytes of oysters with a high capacity for survival [18]. The objective of this study was to investigate: (1) the resistance to V. aestuarianus infection in C. gigas at the juvenile and adult stages under laboratory-controlled conditions using two stocks of oysters and (2) to analyze the association of the survival capacity with the basal expression levels of a selection of immune-related genes. 
For each stock, a control line and a line selected for higher survival at the spat stage under field conditions (which was also related to higher resistance to OsHV-1 infection) were evaluated. First, experimental OsHV-1 infection was performed to confirm the level of resistance of each line to the viral infection. Then, two approaches were used to test for resistance to V. aestuarianus infection. The first approach challenged the oysters in primary exposures at the juvenile stage or the adult stage to determine the level of resistance according to the size and/or age of the oysters. The second approach used successive infections at the adult stage with the survivors of the previous experimental infections to determine whether the survivors became resistant to the bacterial infection. Finally, we evaluated the immune status of non-stimulated oysters before the onset of bacterial infection under laboratory conditions. A mass selection to increase survival in C. gigas was performed in two stocks (named A and B) of wild oysters sampled from two sites in the Marennes-Oléron Bay (Charente Maritime, France) in 2008. For each line, a base population G0 was produced in 2009, and a sub-sample was kept in our facilities to avoid disease-related mortality and to produce the control line of the following generation (G1-C). This control allowed the assessment of the effects of changing environmental conditions during the course of the experiment to estimate the response to selection [11]. The other sub-sample of oysters was deployed in the field, where mortality outbreaks caused by OsHV-1 were routinely observed each year since 2009 [21]. Then, the survivors were spawned in 2010 to produce the selected line G1-S. The same approach was used in February 2011 and March 2012 to produce G2 and G3, respectively. Four sub-lines were produced for the selected line from G2; further details are given in [11]. The oysters used in this study were the control lines AC and BC of G3 and the selected lines AS and BS, which were the best sub-lines for survival and OsHV-1 resistance in the field. The field evaluation of the C and S lines at the spat stage during the summer of 2012 confirmed a higher mortality for the C lines (92.9%) compared with the S lines (32.0%). Nevertheless, the oysters used in our experimental infections were either kept in our inland facilities to avoid disease-related mortality or deployed to the field in October 2012 prior to their evaluation in the laboratory (Figure 2). Summary of the production and exposure to V. aestuarianus by cohabitation challenges. Trials 3 and 4 were performed for the control and selected lines at either the juvenile or adult stages. Light grey boxes indicate primary-challenge and dark grey boxes indicate a second and third exposure to the bacteria. Viral and bacterial suspensions The viral suspension was obtained using the protocol of Schikorski [22]. Briefly, naïve and unselected hatchery-produced oysters were infected by injecting 50 µL of a previous viral suspension after "anesthesia". Dead oysters were dissected; the mantle and gills were removed, pooled, diluted, crushed and filtered using a 0.22-µm filter to obtain a clarified tissue homogenate. The Vibrio strain used in the bacterial challenges was the highly pathogenic strain 02/041 that was isolated during a mortality episode in adults. This strain was previously studied and was included in this study as a reference strain [23]. The Vibrio suspension was obtained from an isolate maintained at −80 °C. 
The bacterial strain was placed in liquid Zobell and incubated for 24 h at 20 °C with constant shaking at 20 rpm. The resulting solution was centrifuged at 3200 × g for 10 min. The supernatant was discarded, and the pellet was washed and suspended in sterile artificial sea water (SASW). Mortality induction protocols Two types of experimental infection protocols were used to evaluate disease resistance in C. gigas: a by-injection protocol and a by-cohabitation protocol. For all trials, the seawater was filtered, UV treated and maintained at 21 °C with adequate aeration and without the addition of food. For the large volume tanks (150 L), a recirculating system was used to optimize the horizontal transmission of the disease. The salinity ranged between 29.5 and 36.7% for all trials. For the by-injection protocol, pathogenic agents were directly injected into the adductor muscles of oysters to test their disease resistance. First, oysters were "anesthetized" in a solution containing magnesium chloride (MgCl2, 50 g/L) in a mixture of seawater and distilled water (1:4, v:v) for 4 h. Subsequently, 50 µL of the infectious solution (bacterial or viral suspension) was injected into the adductor muscle using a 1 mL micro-syringe equipped with an 18 g needle. The injected oysters were either naïve oysters of the selected or control lines or naïve unselected hatchery-produced oysters that were used as "sources" for the horizontal transmission of the disease to the selected and control lines through a by-cohabitation protocol. For the by-cohabitation components, we used the protocols previously described in [24, 25]. As described for the by-injection protocol, naïve and unselected hatchery-produced oysters were injected with a specific pathogen and then transferred into tanks for 24 h. Then, they were placed in contact with the naïve oysters of the selected and control lines to test their disease resistance. A ratio of 10 g of injected oysters (with the shell) per 10 L of sea water was used for all of the experiments. A dead oyster was defined as a moribund animal that was unable to close its valve after 5 min out of the water. Trial 1: experimental infection by cohabitation between oysters injected with OsHV-1 and the selected and control lines An experimental infection with OsHV-1 was performed in April 2013 to verify the higher resistance to OsHV-1 infection of the selected lines AS and BS compared with the control lines AC and BC. The oysters were 13 months old, and the mean individual weight was 22 g (Table 1). The AS, BS, AC and BC lines were evaluated throughout the cohabitation with oysters injected with a viral suspension as described above. For each line (AS, BS, AC and BC), four 5 L replicate tanks were used; each tank contained 10 oysters (Table 1). For three replicates, 4 oysters injected with the viral suspension were added to each tank for 48 h. In the fourth tank, 4 oysters injected with SASW were added for 48 h. The mortality was recorded daily for 11 days. Table 1 Summary of the trials and sets to evaluate OsHV-1 and V. aestuarianus susceptibility. Trial 2: experimental infection by injection of the selected and control lines with V. aestuarianus The design of this trial consisted of intramuscular injection of the oysters with suspensions with different bacterial concentrations. 
The bacterial concentration was evaluated spectrometrically at 600 nm and adjusted to an optical density (OD) of 1; then, the suspension was serially diluted to obtain theoretical ODs of 0.0002, 0.002, 0.02 and 0.2, corresponding to 10⁴, 10⁵, 10⁶ and 10⁷ bacteria per mL, respectively. The bacterial concentration and purity were verified by plating. Three 5 L tanks were used for each OD and each line; each tank contained ten oysters injected with 50 µL of V. aestuarianus (500 CFU per oyster for the 10⁴ bacteria/mL suspension and 0.5 million CFU for the 10⁷ bacteria/mL suspension). For each line, an additional tank was used as a control; this tank contained 10 oysters injected with SASW. Observations for mortality were performed daily for 6 days.

Trial 3: primary infection by cohabitation between oysters injected with V. aestuarianus and the selected and control lines
Five sets of trials using a by-cohabitation protocol were used to better mimic natural infection. For each trial (with the exception of set 3), three 150 L tanks were used to challenge a larger number of larger animals at the same time. An additional tank was used for the controls, which consisted of oysters intramuscularly injected with SASW and placed as sources in contact with the four lines (Table 1). In each tank, the oysters of the four lines were randomly placed together, with 25 oysters per line and the shells individually tagged for identification. For set 3 of trial 3, two 10 L tanks were used per line; each tank contained approximately 15–20 oysters (Table 1). The five sets of trials used to evaluate the resistance of the four lines AS, AC, BS and BC to V. aestuarianus are summarized in Table 1 and Figure 2: Three sets were performed in February and March 2013. The naïve oysters were 11–13 months old and weighed 22 g (Table 1), which corresponded to the juvenile stage according to the oyster industry. All oysters were always kept in our facilities, and no mortality was recorded. The fourth set was conducted during spring 2014. The oysters were 26 months old, and the mean individual weight was 120 g (Table 1), which corresponded to the adult stage. The oysters were always kept in our facilities, and no mortality was recorded. The fifth set was conducted during the fall of 2014 with oysters kept for 2 years in the field at Agnas (Charente Maritime, France). The control and selected oyster lines experienced 70 and 34% mortality, respectively. The oysters were 32 months old, and the mean individual weight was 100 g.

Trial 4: successive infections by cohabitation of oysters injected with V. aestuarianus and the selected and control lines
All oysters that survived sets 1–3 in trial 3 were again challenged in trial 4 with two additional successive challenges in May 2014 and November 2014 (Figure 2). Between trial 3 and the first set in trial 4 and between the two sets of trial 4, the oysters were kept at the Ifremer facilities in La Tremblade. All effluent from the holding facilities was treated with chlorine. The occurrence of mortality was also recorded during these periods. In set 1 of trial 4, the oysters were 26 months old with an average weight of 100 g, whereas in set 2 of trial 4 the oysters were 32 months old with an average weight of 170 g (Table 1).

Detection of OsHV-1 and V. aestuarianus DNA
For all of the trials, moribund oysters from the selected and control lines were sampled for the detection of OsHV-1 and V. aestuarianus DNA.
Total DNA was extracted from tissue fragments (mantle + gills) using the QIAgen (Hilden, Germany) QIAamp tissue mini kit combined with the QIAcube automated system according to the manufacturer's protocol. The total DNA amount was adjusted to 5 ng/µL following Nanodrop (Thermo Scientific) measurement. A real-time PCR assay was conducted on the MX3000 and MX3005 Thermocyclers (Agilent) using the Brilliant III Ultrafast kit (Stratagene). Each reaction was run in duplicate in a final volume of 20 µL containing the DNA sample (5 µL at a 5 ng/µL concentration), 200 nM of each primer (for OsHV-1, DPF 5′ ATT GAT GATGTG GAT AAT CTG TG 3′ and DPR 5′ GGT AAA TAC CAT TGG TCT TGTTCC 3′ [26] and for V. aestuarianus, DNAj-F 5′ GTATGAAATTTTAACTGACCCACAA3′ and DNAj-R 5′ CAATTTCTTTCGAACAACCAC 3′ [27]) and 200 nM of an oligonucleotide probe (for V. aestuarianus DNAj, probe 5′ TGGTAGCGCAGACTTCGGCGAC). The real-time PCR cycling conditions were as follows: 3 min at 95 °C, followed by 40 cycles of amplification at 95 °C for 5 s and 60 °C for 20 s. For OsHV-1 DNA quantification, melting curves were also plotted (55-95 °C) to ensure that a single PCR product was amplified for each set of primers. Negative controls (without DNA) were included. Evaluation of the immune status of the selected and control lines The immune statuses of the oyster lines were evaluated based on the expression of immune-related genes in the AS, AC, BS and BC lines prior to trial 3 set 1 at 12 months and trial 3 set 5 at 32 months. Immune-related genes were selected based on previous studies showing their transcriptomic regulation following vibrio challenge or between oyster lines selected for their resistance/sensitivity to in situ mortality [16, 18]. Oysters were removed from their shells, and the whole soft body was immediately plunged into liquid nitrogen. Then, the oysters were pulverized in groups (three groups of 10 oysters per oyster line) with a Mixer Mill MM 400 (Retsch) under liquid nitrogen conditions. The frozen oyster powder was stored at −80 °C prior to RNA extraction for gene expression analysis. RNA extraction from the frozen oyster powder was performed with the TRIzol Reagent (Invitrogen) according to the manufacturer's instructions. Briefly, 100 mg of oyster powder was homogenized in 1 mL of TRIzol by vortexing for 1 h at 4 °C. Next, the RNA samples were treated with 5 U of DNase I (Invitrogen) to eliminate DNA contamination according to the manufacturer's instructions, followed by RNA precipitation to eliminate the degraded DNA (with 100% isopropyl alcohol and 3 M Na-acetate). Then, the RNA samples were dissolved in 50 µL of RNase-free water and quantified using a NanoDrop spectrophotometer (Thermo Scientific). The integrity of the total RNA was verified using 1.5% agarose gel electrophoresis. Finally, total RNA was reverse transcribed using the Moloney Murine Leukemia Virus Reverse Transcriptase (MMLV-RT) according to the manufacturer's instructions (Invitrogen). qPCR assays were performed on the Light-Cycler 480 System (Roche) in a final volume of 5 µL containing 1× Light-Cycler 480 master mix, 0.5 μM of each primer and 1 μL of cDNA diluted 1/8 in sterile ultra-pure water. The primer pairs used to amplify the 31 immune-related genes are listed in Table 2. Primer pair efficiencies (E) were calculated by five serial dilutions of pooled cDNA ranging from 1/2 to 1/64 in sterile ultra-pure water using the slopes provided by the LightCycler software according to the equation: E = 10[−1/slope]. 
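As a worked illustration of the amplification-efficiency formula stated above, E = 10^(−1/slope), the short sketch below fits the slope of a standard curve (quantification cycle against log10 of the dilution factor) by least squares and converts it to an efficiency. The Cq values and dilution points are invented for illustration and do not come from the study.

```python
import numpy as np

# Hypothetical standard curve for one primer pair: two-fold serial dilutions of
# pooled cDNA (as described in the text) and invented quantification-cycle values.
dilution = np.array([1/2, 1/4, 1/8, 1/16, 1/32])
cq = np.array([22.1, 23.2, 24.2, 25.3, 26.4])

slope, intercept = np.polyfit(np.log10(dilution), cq, deg=1)
E = 10 ** (-1.0 / slope)        # E = 10^(-1/slope); E = 2 would be perfect doubling
print(f"slope = {slope:.2f}, E = {E:.2f} ({100 * (E - 1):.0f}% efficiency per cycle)")
```

With these invented numbers the slope is close to −3.6, giving E of about 1.9, i.e. roughly 90% efficiency per cycle.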
The qPCR program was composed of an initial denaturation step of 15 min at 95 °C, followed by amplification of the target cDNA (35 cycles of denaturation at 95 °C for 10 s, annealing at 57 °C for 20 s and extension at 72 °C for 25 s with fluorescence detection). Relative expression levels of the immune-related genes were calculated with the method described by Pfaffl [28] and normalized using the mean of values of three constitutively expressed genes (Cg-EF1 [GenBank AB122066], Cg-RPL40 [GenBank FP004478] and Cg-RPS6 [GenBank HS119070]). Table 2 Primers and functional categories of the analyzed immune-related genes. Survival was analyzed with the SAS 9 software using the GLIMMIX procedure by a logistic regression for binomial data. The general model for the first trial was: $$Y_{i} = Logit\left( \pi \right) = \ln \frac{\pi }{1 - \pi } = \mu + stock_{i} + selection_{j} + stock_{i} *selection_{j} + \varepsilon$$ where Y i was the survival probability, µ was the intercept, stock i represented stock A or line B, and selection j represented the level of selection for OsHV-1 resistance (selected or control). For trial 2, the bacterial concentration factor and all interactions between the bacterial concentrations, the stock and the selection factors were added to the model. For the first three sets of challenges in trial 3, the set factor and its interactions were added to the model. A similar model was also used for the last two sets of challenges in trial 4. For the gene expression analysis, statistical analysis was performed using the STATISTICA software version 7.1 (StatSoft) using the Mann–Whitney U test (significant value: p < 0.05). Hierarchical clustering of the gene expression data was performed with the Multiple ArrayViewer software using the average linkage clustering with the Spearman Rank Correlation as the default distance metric. No mortality occurred in the control tanks for each line (AC, AS, BC and BS). The first mortality occurred on day 2, and a peak of mortality was observed on day 5. As expected, the selected oysters presented low mortality (3 and 7% for AS and BS, respectively), whereas the control oysters had significantly higher mortality (60 and 67% for AC and BC, respectively, p < 0.001). The mean mortality was 32 and 37% for stocks A and B, respectively. The mortality between these stocks was not significantly different. Moreover, no difference was observed for the interaction between the stocks and the level of selection for resistance to OsHV-1 infection (p > 0.05). High amounts of OsHV-1 DNA was detected in all of the moribund oyster samples analyzed (n = 31). No mortality was observed for oysters injected with SASW in the control tanks. The mortality rates of each line at each injected dose are shown in Figure 3. Very high mortality ranging from 77 to 100% was observed for all lines regardless of the infectious dose. None of the factors were significant with the exception of the bacterial concentration factor (p = 0.0003) and the interaction between the stock and level of selection (p = 0.0005) (Table 3). At the stock level, the selected line exhibited lower mortality compared to the control line for stock B at each bacterial concentration; the opposite effect was observed for stock A at the two lowest concentrations (Figure 3). The mortality at the lowest bacterial concentration was significantly lower compared to the morality at the other concentrations (p < 0.0001). Mean mortality (sd between tanks) in Trial 2. 
Oyster lines were challenged at the juvenile stage via intramuscular injection of V. aestuarianus for the control (AC and BC) and selected lines (AS and BS) at different bacterial concentrations. Table 3 Logit analysis of mortality in Trial 2. Trial 3: primary infection by cohabitation between oysters injected with V. aestuarianus and the selected and control lines at the juvenile and adult stages Primary exposure with the V. aestuarianus cohabitation protocol at the juvenile stage (sets 1 to 3) No mortality occurred in the control tanks of each set of trial 3. All moribund oysters sampled were positive for V. aestuarianus DNA (n = 45). The mortality of each line at the endpoint for the first three sets of trial 3 (corresponding to the juvenile stage) is presented in Figure 4. In contrast to the by-injection protocol, higher variability in mortality was observed among the lines, with a range from 6 to 90%. A significant interaction was found between the sets and selection (p = 0.0008) (Table 4); this interaction was explained at the selection level, with higher mortality observed in set 1 compared to set 3 for the selected lines and the highest mortality observed in the control lines in set 3 (Figure 4). The mean mortalities of the three sets were 14 and 33% for the AS and BS lines, respectively; this mortality was significantly lower than the mortality observed for the control lines (71 and 80% for AC and BC, respectively, p < 0.0001) (Table 4). To a much lesser extent, stock A had significantly lower mortality (43%) than stock B (53%) (p = 0.0065). Mean mortality (sd between tanks) in Trial 3 sets 1 to 3. Oyster lines were challenged at the juvenile stage via cohabitation challenges between oysters injected with V. aestuarianus and healthy juveniles of the control (AC and BC) and selected lines (AS and BS). Table 4 Logit analysis of mortality in Trial 3 sets 1 to 3. Primary exposure with the V. aestuarianus cohabitation protocol at the adult stage (sets 4 and 5) For the fourth set of trial 3, the mortality was 87, 45, 62 and 70% for the AS, AC, BS and BC lines, respectively; the mortality decreased to 49, 32, 36 and 53%, respectively, in the fifth set of challenges in trial 3 (Figure 5). None of the factors were significant with the exception of a significantly lower mortality in the fifth set compared to the fourth set (p = 0.0002) and a significant interaction between the stocks and level of selection (p = 0.0012) (Table 5). At the stock level, the stock A control line had significantly lower mortality than the selected line, whereas the opposite trend was observed for stock B (Figure 5). Mean mortality (sd within l) in Trial 3 sets 4 and 5. Oyster lines were challenged at the juvenile stage for the three first sets of trial 3 and at the adult stage for sets 4 and 5 of trial 3 via cohabitation challenges between oysters injected with V. aestuarianus and healthy juveniles of the control (AC and BC) and selected lines (AS and BS). Table 5 Logit analysis of mortality in Trial 3 set 4 and 5. Expression levels of immune-related genes can discriminate oyster lines in terms of susceptibility/resistance to V. aestuarianus infection at the juvenile and adult stages The hierarchical clustering of the 31 immune-related genes or only the differentially expressed genes (p < 0.05) could separate the oyster lines in terms of their resistance/sensitivity to bacterial infection at the two developmental stages analyzed (12 and 32 months, Figure 6). 
At 12 months of age, hierarchical clustering of the gene expression data separated the oyster lines into two major clusters of conditions: the first cluster included the AS and BS lines, while the second cluster included the AC and BC lines (Figure 6A). At 32 months of age, two major clusters of conditions were found that were similar to those observed for the 12 month old oysters, but the oyster lines did not separate in the same manner (Figure 6B): the first cluster included the AC and BS lines, whereas the second cluster included the AS and BC lines. Interestingly, these discriminations of the oyster lines were in accordance with the resistance/sensitivity to infection of the lines at 12 and 32 months of age. Discrimination of oyster lines contrasted in term of susceptibility based on the expression levels of immune-related genes. A Hierarchical clustering of the relative expression levels of 31 immune-related genes in non-stimulated oysters of the AC, AS, BC and BS lines (three groups of ten oysters per line) at 12 months of age. B Hierarchical clustering of the relative expression levels of the 11 differentially expressed genes in non-stimulated oysters of the AC, AS, BC and BS lines (three groups of ten oysters per line) at 12 months of age. C Hierarchical clustering of the relative expression levels of 31 immune-related genes in non-stimulated oysters of the AC, AS, BC and BS lines (three groups of ten oysters per line) at 32 months of age. D Hierarchical clustering of the relative expression levels of the 7 differentially expressed genes in non-stimulated oysters of the AC, AS, BC and BS lines (three groups of ten oysters per line) at 32 months of age. The intensity of the color (from green to red) indicates the magnitude of differential expression (see color scale at the bottom of the image). The dendrogram at the left of the figures indicates the relationship among samples from the oyster lines, whereas the dendrogram at the top of the figures indicates the relationship among the relative expression levels of the selected genes. In the juveniles, 11 genes showed differential gene expression patterns, while in the adults only 7 genes showed differential expression patterns. Among these differentially expressed genes, four genes were common to juveniles and adults (306, 283, 284 and 304). These four genes showed the same patterns of expression in the juveniles and adults according to their resistance/sensitivity to infection. Thus, genes expressed at higher levels in the susceptible lines (AC and BC) vs. the resistant lines (AS and BS) in juveniles also appeared to be expressed at higher levels in the susceptible lines (AC and BS) vs. the resistant lines (AS and BC) in adults (genes 283, 284 and 304). Likewise, the gene expressed at a higher level in the resistant lines (AS and BS) vs. the susceptible lines (AC and BC) in juveniles also appeared to be expressed at a higher level in the resistant lines (AS and BC) vs. the susceptible lines (AC and BS) in adults (gene 306). Other differentially expressed genes appeared to be associated with one developmental stage: 7 genes were found to be differentially expressed only in juveniles (216, 8, 420, 401, 220, 396 and 441), and 3 genes were found to be differentially expressed only in adults (189, 234 and 378). Trial 4: successive infections by cohabitation between oysters injected with V. 
aestuarianus and the selected and control lines
After the first three sets of bacterial challenges of trial 3, the final mortality was 14, 71, 33 and 80% for AS, AC, BS and BC, respectively. During the period between trials 3 and 4, the survivors were kept in a tank with filtered and UV-treated seawater enriched with microalgae. Although this period did not represent a disease challenge, the survivors still experienced significant mortality associated with the detection of V. aestuarianus DNA in the moribund oysters. Most of the mortality was observed in July 2013 after a spawning event. The mortality between trial 3 and the first set of challenges of trial 4 was 74, 90, 32 and 97% for AS, AC, BS and BC, respectively (Table 6); consequently, the cumulative mortality due to V. aestuarianus before trial 4 reached 78, 96, 54 and 99%, respectively. Although the control lines were tested in trial 4, fewer than 5 oysters remained per control line; therefore, their mortality was not compared with that of the selected lines. During the first set of challenges in trial 4, the mortality was 46 and 28% for AS and BS, respectively (Table 5). No significant difference in mortality (<5%) was reported between the two sets of challenges in trial 4. Finally, the survivors exhibited some mortality, with 60 and 38% mortality for AS and BS, respectively (Table 6). The cumulative mortality after three successive challenges with V. aestuarianus in trials 3 and 4 (including the mortality between trials) was 96, 99, 84 and 100% for AS, AC, BS and BC, respectively (Table 6).
Table 6 Mortality rates per line during three successive challenges.

While mortality related to OsHV-1 and V. aestuarianus was reported in C. gigas in France prior to 2008 [27, 29], their impact on French oyster production became predominant due to recurrent and intense mortality in spat and adult oysters. While selective breeding to enhance resistance to OsHV-1 infection in C. gigas has recently been demonstrated [11, 15], such a demonstration is still under investigation for V. aestuarianus. Nevertheless, this study aimed to elucidate whether selective pressure exerted by viral infections in the field could impact the susceptibility of C. gigas to bacterial infection.

In trial 1, the selected lines of both stocks exhibited much lower mortality (AS 3% and BS 7%) than the control lines (AC 60% and BC 67%). Although higher mortality was observed in the field evaluation (50 and 35% for AS and BS, respectively, and 91 and 94% for AC and BC, respectively [11]), our result was consistent with the field evaluation and supported the conclusion that selection to enhance survival in C. gigas spat was effective against herpesvirus infection. The lower mortality observed in our study could be explained by the use of larger (20 g versus <7 g) and older (13 months old versus <5 months old) oysters (juvenile versus spat) because OsHV-1 resistance increases with age and size [21]. However, the most important information from trial 1 was the confirmation that AS and BS had higher resistance to OsHV-1 infection than AC and BC before their exposure to V. aestuarianus.

In trial 2, the mortality of three of the lines reached 100%. The mortality for BS was >90% at the higher bacterial concentrations corresponding to 5 × 10⁴ and 5 × 10⁵ CFU per oyster (Figure 3). Although the mortality was slightly lower at the lowest bacterial concentration (corresponding to 500 CFU per oyster and ranging from 76 to 96%; Figure 3), this finding confirmed that strain 02/041 was a highly virulent strain in C. gigas juveniles (20 g); this was recently demonstrated in spat weighing 1.5 g that exhibited high mortality (>75%) at doses of 10² and 10⁷ CFU per spat [30]. Consequently, selective breeding to enhance resistance to OsHV-1 at the spat stage did not confer higher resistance to V. aestuarianus infection at the juvenile stage in this injection trial. Injection of the bacteria directly into the adductor muscle may bypass the oysters' natural barriers to infection by V. aestuarianus. Due to the high mortality observed for all lines when the injection method is used, the cohabitation method should be applied to evaluate the resistance of the control and selected lines to V. aestuarianus infection, because transmission of the bacteria between oysters is what is expected to occur under natural conditions.

The three sets of trial 3 were performed at the juvenile stage, and all exhibited the same mortality patterns. The main finding revealed that the selected lines AS and BS had lower mortality than the control lines AC and BC (Figure 4). This result suggested that selection to increase survival at the spat stage in the field was also efficient in enhancing dual resistance to OsHV-1 infection and V. aestuarianus infection at the juvenile stage. A similar result was observed in Crassostrea virginica for dual resistance to Haplosporidium nelsoni and Perkinsus marinus [31], but most studies revealed that breeding for higher resistance to a disease did not confer a higher resistance to another disease [32]. In contrast, this pattern was not found at the adult stage, where much higher mortality was observed for both of the selected lines (particularly the AS line). Indeed, line AS exhibited 87% mortality in set 4 of trial 3, whereas the control line AC had lower mortality (45%) (Figure 5). Additionally, while the AS line had higher mortality in adults than in juveniles, the AC line had lower mortality at the adult stage (45%) than at the spat stage (71%) (Figure 5). The same pattern was observed for BS and BC to a lesser extent, except that BS had lower mortality than BC at the adult stage (Figure 5). A similar pattern was observed in set 5 of trial 3, although the mortality was lower than that recorded in set 4. For set 5 of trial 3, it is important to note that the oysters were survivors of field mortality events that could have been related to OsHV-1 and/or V. aestuarianus and/or other pathogens. Consequently, the survivors used in set 5 were likely to be genetically more resistant than the naïve oysters used in set 4, as demonstrated for the summer mortality phenomenon in C. gigas [33]. Another hypothesis could be related to the reproductive status. Sets 4 and 5 of trial 3 occurred in May and November of 2014, which represented the pre- and post-spawning periods, respectively. Previous experiments have shown that the active gametogenesis period corresponds to higher susceptibility to vibriosis in mollusks [34–37]. Consequently, primary infection of C. gigas with V. aestuarianus by cohabitation showed a different mortality pattern according to the stage of development and the level of selection. Hence, evaluation of vibriosis resistance in C. gigas reflects a complex interaction between the genotype and the stage of development, and therefore the size, reproductive status and age of the oysters, as described for OsHV-1 in C. gigas [21, 33, 38].
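The line and stock comparisons discussed above were tested with the binomial logit model described in the statistical analysis section. For readers who want to reproduce this kind of analysis, a minimal sketch using Python's statsmodels on invented data is shown below; the column names and records are hypothetical, and the original analysis used the SAS GLIMMIX procedure, so results would not be numerically identical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented per-oyster records: stock (A or B), selection (S = selected, C = control)
# and the binary outcome of a challenge (1 = survived, 0 = died).
df = pd.DataFrame({
    "stock":     ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"] * 25,
    "selection": ["S", "S", "S", "C", "C", "C", "S", "S", "S", "C", "C", "C"] * 25,
    "survived":  [ 1,   1,   0,   1,   0,   0,   1,   1,   0,   0,   0,   1 ] * 25,
})

# logit(P(survive)) = mu + stock + selection + stock:selection, as in the Methods
model = smf.logit("survived ~ C(stock) * C(selection)", data=df).fit(disp=False)
print(model.summary())   # the C(stock):C(selection) row plays the role of the
                         # stock x selection interaction reported in Tables 3-5
```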
Our study also revealed that selecting for resistance to OsHV-1 infection in spat did not confer either higher resistance or susceptibility to V. aestuarianus infection in adults, which was in agreement with similar studies in oyster species [39–41]. Experimental studies working on V. aestuarianus should replicate the oyster genotypes. The difference in mortality between the four lines used in our study also suggested a genetic basis for V. aestuarianus resistance in C. gigas, but this speculation required further investigation. Our results showed high variability in the expression of selected immune-related genes that was dependent on the animal batch and age. This variability allowed the discrimination of oyster batches and correlated with their sensitivity to bacterial infection. Interestingly, expression of this set of immune-related genes was correlated with sensitivity to vibriosis rather than the genetic background. Because sensitivity to infection evolved depending on the oyster stage tested, the clustering of oyster batches also evolved in an age-dependent manner. At the juvenile stage, the lines selected for their resistance to OsHV-1 infection that presented low sensitivity to vibriosis were clearly discriminated from the control lines that presented higher sensitivity to V. aestuarianus infection. At the adult stage, selected line A and control line B showed intermediate sensitivity to vibriosis and were discriminated from control line A and selected line B with higher sensitivity to V. aestuarianus infection. Our results showed for the first time the possibility of using gene expression analysis to discriminate between oyster lines according to the resistance/susceptibility at two different developmental stages independent of the genetic background of the oyster lines. Specifically, four genes discriminated between the oyster lines according to their resistance/susceptibility. These four genes are associated with different immune functions and suggest a complex discrimination of oyster lines through their immune status. The four genes able to discriminate oyster lines are related to antimicrobial functions (the proline rich peptide Cg-prp [42]), anti-oxidative responses (the extracellular superoxide dismutase Cg-SOD [43]), cell adhesion (the Integrin beta-PS [44]) and recognition molecules (the L-rhamnose-binding lectin [45]). These results show that it is now necessary to develop global transcriptomic approaches to clearly elucidate the transcriptomic basis of the resistance/susceptibility of oysters. Finally, trial 4 was designed to test the effect of successive challenges using survivors of a previous challenge. The survivors of an initial exposure to V. aestuarianus still exhibited significant mortality in response to the same pathogenic agent at the second exposure. Consequently, the survivors were not genetically resistant, but they were either less susceptible during the previous exposures or the infection cohabitation did not allow equal expose of the oysters to the bacteria. Thus, a first contact with V. aestuarianus is not protective. This mortality pattern was also found in the abalone Haliotis tuberculata during two successive infections by the pathogen Vibrio harveyi [46]. Our result contrasted with the results obtained for the summer mortality phenomenon and OsHV-1, for which the survivors were selected for resistance and exhibited low mortality the following year [33, 38]. Between trials 3 and 4, mortality due to V. 
Between trials 3 and 4, mortality due to V. aestuarianus was mostly observed after a spawning event, thereby reinforcing the importance of the reproductive status for the resistance to the bacteria. Post-spawning oysters were much more susceptible to the disease, as demonstrated for the mortality events due to opportunistic Vibrio sp. in C. gigas [34, 37, 47] and for OsHV-1 in C. gigas, which occurred a couple of days after spawning [48]. Otherwise, the cumulative mortality after three successive exposures to V. aestuarianus was very high for all lines (ranging from 84 to 100%) (Table 6). These mortality rates are extremely concerning for French oyster farmers, whose operations cannot remain viable with this level of loss of C. gigas in their oyster stocks.

In conclusion, our study showed that: (1) cohabitation between injected oysters and healthy oysters seemed to be preferable to intramuscular injection for the genetic evaluation of V. aestuarianus resistance in C. gigas; (2) the mortality pattern for primary exposure to V. aestuarianus at the juvenile stage was similar to the pattern observed for OsHV-1 infection, with a higher resistance in selected oysters than control oysters, which suggested dual resistance at the juvenile stage; (3) differences in the mortality patterns were highlighted between juveniles and adults during primary infection, suggesting a complex interaction between the genotype and the stage of development for Vibrio sensitivity; and (4) selection of immune-related genes allowed for the discrimination of batches depending on their sensitivity to infection at the two stages tested rather than on their genotype. The differences in mortality among the lines also suggested a genetic basis for the resistance to V. aestuarianus infection. Similarly, selection to enhance OsHV-1 resistance did not confer increased susceptibility or resistance to V. aestuarianus infection. Therefore, to improve resistance to V. aestuarianus infection, a breeding program needs to use high intensities of selective pressure. Resistance to V. aestuarianus infection should be evaluated through successive exposures to the disease because the oysters remained susceptible to V. aestuarianus even if they survived one or several mortality outbreaks related to the disease. Breeding companies interested in enhancing Vibrio resistance should use oysters that were previously selected for resistance to OsHV-1 infection at the spat stage for broodstock. Then, these broodstocks should be evaluated at the adult stage to combine the resistance traits.

Comps M, Duthoit J-L (1976) Infection virale associée à la "maladie des branchies" de l'huître portugaise Crassostrea angulata Lmk. CR Acad Sc Paris Série D 283:1595–1597
Grizel H (1985) Etude des récentes épizooties de l'huître plate (Ostrea edulis Linné) et leur impact sur l'ostréiculture bretonne. Université des Sciences et Techniques du Languedoc, Montpellier, p 145
Grizel H, Héral M (1991) Introduction into France of the Japanese oyster (Crassostrea gigas). ICES J Mar Sci 47:399–403
Murray AG, Marcos-Lopez M, Collet B, Munro LA (2012) A review of the risk posed to Scottish mollusc aquaculture from Bonamia, Marteilia and oyster herpesvirus. Aquaculture 370:7–13
Martenot C, Fourour S, Oden E, Jouaux A, Travaille E, Malas JP, Houssin M (2012) Detection of the OsHV-1 mu Var in the Pacific oyster Crassostrea gigas before 2008 in France and description of two new microvariants of the Ostreid Herpesvirus 1 (OsHV-1). Aquaculture 338:293–296
Ségarra A, Pépin JF, Arzul I, Morga B, Faury N, Renault T (2010) Detection and description of a particular Ostreid herpesvirus 1 genotype associated with massive mortality outbreaks of Pacific oysters, Crassostrea gigas, in France in 2008. Virus Res 153:92–99
Francois C, Joly J-P, Garcia C, Lupo C, Travers M-A, Pépin J-F, Hatt P-J, Arzul I, Omnes E, Tourbiez D, Faury N, Haffner P, Huchet E, Dubreuil C, Chollet B, Renault T, Cordier R, Hébert P, Le Gagneur E, Parrad S, Gerla D, Annezo J-P, Terre-Terrillon A, Le Gal D, Langlade A, Bedier E, Hittier B, Grizon J, Chabirand J-M, Robert S, Seugnet J-L, Rumebe M, Le Gall P, Bouchoucha M, Baldi Y, Masson J-C (2013) Bilan 2012 du réseau REPAMO—Réseau national de surveillance de la santé des mollusques marins
Francois C, Joly J-P, Garcia C, Lupo C, Travers M-A, Tourbiez D, Chollet B, Faury N, Haffner P, Dubreuil C, Serpin D, Renault T, Cordier R, Hébert P, Le Gagneur E, Parrad S, Gerla D, Cheve J, Penot J, Le Gal D, Lebrun L, Le Gac-Abernot C, Langlade A, Bédier E, Palvadeau H, Grizon J, Chabirand J-M, Robert S, Seugnet J-L, Rumebe M, Le Gall P, Bouchoucha M, Baldi Y, Masson J-C (2014) Bilan 2013 du réseau REPAMO—Réseau national de surveillance de la santé des mollusques marins
Francois C (2015) Bilan 2014 du réseau REPAMO—Réseau national de surveillance de la santé des mollusques marins
Pépin J-F, Soletchnik P, Robert S (2014) Mortalités massives de l'Huître creuse. Synthèse—Rapport final des études menées sur les mortalités de naissains d'huîtres creuses C. gigas sur le littoral charentais pour la période de 2007 à 2012
Dégremont L, Nourry M, Maurouard E (2015) Mass selection for survival and resistance to OsHV-1 infection in Crassostrea gigas spat in field conditions: response to selection after four generations. Aquaculture 446:111–121
Dégremont L (2011) Evidence of herpesvirus (OsHV-1) resistance in juvenile Crassostrea gigas selected for high resistance to the summer mortality phenomenon. Aquaculture 317:94–98
Dégremont L, Bédier E, Boudry P (2010) Summer mortality of hatchery-produced Pacific oyster spat (Crassostrea gigas). II. Response to selection for survival and its influence on growth and yield. Aquaculture 299:21–29
Dégremont L, Soletchnik P, Boudry P (2010) Summer mortality of selected juvenile Pacific oyster Crassostrea gigas under laboratory conditions and in comparison with field performance. J Shellfish Res 29:847–856
Dégremont L, Lamy J-B, Pépin J-F, Travers M-A, Renault T (2015) New insight for the genetic evaluation of resistance to ostreid herpesvirus infection, a worldwide disease, in Crassostrea gigas. PLoS One 10:e0127917
de Lorgeril J, Zenagui R, Rosa RD, Piquemal D, Bachère E (2011) Whole transcriptome profiling of successful immune response to Vibrio infections in the oyster Crassostrea gigas by digital gene expression analysis. PLoS One 6:e23142
Rosa RD, de Lorgeril J, Tailliez P, Bruno R, Piquemal D, Bachère E (2012) A hemocyte gene expression signature correlated with predictive capacity of oysters to survive Vibrio infections. BMC Genomics 13:252
Schmitt P, Santini A, Vergnes A, Dégremont L, de Lorgeril J (2013) Sequence polymorphism and expression variability of Crassostrea gigas immune related genes discriminate two oyster lines contrasted in term of resistance to summer mortalities. PLoS One 8:e75900
Zhang L, Li L, Zhang G (2011) A Crassostrea gigas Toll-like receptor and comparative analysis of TLR pathway in invertebrates. Fish Shellfish Immunol 30:653–660
Rosa RD, Santini A, Fievet J, Bulet P, Destoumieux-Garzon D, Bachère E (2011) Big defensins, a diverse family of antimicrobial peptides that follows different patterns of expression in hemocytes of the oyster Crassostrea gigas. PLoS One 6:e25594
Dégremont L (2013) Size and genotype affect resistance to mortality caused by OsHV-1 in Crassostrea gigas. Aquaculture 416:129–134
Schikorski D, Renault T, Saulnier D, Faury N, Moreau P, Pépin JF (2011) Experimental infection of Pacific oyster Crassostrea gigas spat by ostreid herpesvirus 1: demonstration of oyster spat susceptibility. Vet Res 42:27
Saulnier D, De Decker S, Haffner P, Cobret L, Robert M, Garcia C (2010) A large-scale epidemiological study to identify bacteria pathogenic to Pacific oyster Crassostrea gigas and correlation between virulence and metalloprotease-like activity. Microbial Ecol 59:787–798
Schikorski D, Faury N, Pépin JF, Saulnier D, Tourbiez D, Renault T (2011) Experimental ostreid herpesvirus 1 infection of the Pacific oyster Crassostrea gigas: kinetics of virus DNA detection by q-PCR in seawater and in oyster samples. Virus Res 155:28–34
Webb SC, Fidler A, Renault T (2007) Primers for PCR-based detection of Ostreid herpes virus-1 (OsHV-1): application in a survey of New Zealand molluscs. Aquaculture 272:126–139
Pfaffl MW (2001) A new mathematical model for relative quantification in real-time RT–PCR. Nucleic Acids Res 29:e45
Garcia C, Thébault A, Dégremont L, Arzul I, Miossec L, Robert M, Chollet B, Francois C, Joly J-P, Ferrand S, Kerdudou N, Renault T (2011) Ostreid herpesvirus 1 detection and relationship with Crassostrea gigas spat mortality in France between 1998 and 2006. Vet Res 42:73
Goudenège D, Travers M-A, Lemire A, Petton B, Haffner P, Labreuche Y, Tourbiez D, Mangenot S, Calteau A, Mazel D, Nicolas J-L, Jacq A, Le Roux F. A single regulatory gene is sufficient to alter Vibrio aestuarianus pathogenicity in oysters. Environ Microbiol (in press)
Ragone Calvo LMR, Calvo GW, Burreson EM (2003) Dual disease resistance in a selectively bred Eastern oyster, Crassostrea virginica, strain tested in Chesapeake Bay. Aquaculture 220:69–87
Dégremont L, Boudry P, Ropert M, Samain JF, Bédier E, Soletchnik P (2010) Effects of age and environment on survival of summer mortality by two selected groups of the Pacific oyster Crassostrea gigas. Aquaculture 299:44–50
De Decker S, Normand J, Saulnier D, Pernet F, Castagnet S, Boudry P (2011) Responses of diploid and triploid Pacific oysters Crassostrea gigas to Vibrio infection in relation to their reproductive status. J Invertebr Pathol 106:179–191
Travers M-A, Le Goïc N, Huchette S, Koken M, Paillard C (2008) Summer immune depression associated with increased susceptibility of the European abalone, Haliotis tuberculata to Vibrio harveyi infection. Fish Shellfish Immunol 25:800–808
Travers MA, Basuyaux O, Le Goic N, Huchette S, Nicolas JL, Koken M, Paillard C (2009) Influence of temperature and spawning effort on Haliotis tuberculata mortalities caused by Vibrio harveyi: an example of emerging vibriosis linked to global warming. Glob Change Biol 15:1365–1376
Wendling CC, Wegner KM (2013) Relative contribution of reproductive investment, thermal stress and Vibrio infection to summer mortality phenomena in Pacific oysters. Aquaculture 412:88–96
Pernet F, Barret J, Le Gall P, Corporeau C, Dégremont L, Lagarde F, Pépin JF, Keck N (2012) Mass mortalities of Pacific oysters Crassostrea gigas reflect infectious diseases and vary with farming practices in the Mediterranean Thau lagoon, France. Aquac Environ Interact 2:215–237
Burreson EM (1991) Susceptibility of MSX-resistant strains of the eastern oyster, Crassostrea virginica, to Perkinsus marinus. J Shellfish Res 10:305–306
Dove MC, Nell JA, O'Connor WA (2013) Evaluation of the progeny of the fourth-generation Sydney rock oyster Saccostrea glomerata (Gould, 1850) breeding lines for resistance to QX disease (Marteilia sydneyi) and winter mortality (Bonamia roughleyi). Aquac Res 44:1791–1800
Frank-Lawale A, Allen SK, Dégremont L (2014) Breeding and domestication of eastern oyster (Crassostrea virginica) lines for culture in the mid-Atlantic, USA: line development and mass selection for disease resistance. J Shellfish Res 33:153–165
Guéguen Y, Romestand B, Fievet J, Schmitt P, Destoumieux-Garzon D, Vandenbulcke F, Bulet P, Bachère E (2009) Oyster hemocytes express a proline-rich peptide displaying synergistic antimicrobial activity with a defensin. Mol Immunol 46:516–522
Gonzalez M, Romestand B, Fievet J, Huvet A, Lebart M-C, Guéguen Y, Bachère E (2005) Evidence in oyster of a plasma extracellular superoxide dismutase which binds LPS. Biochem Biophys Res Commun 338:1089–1097
Jia Z, Zhang T, Jiang S, Wang M, Cheng Q, Sun M, Wang L, Song L (2015) An integrin from oyster Crassostrea gigas mediates the phagocytosis toward Vibrio splendidus through LPS binding activity. Dev Comp Immunol 53:253–264
Roberts S, Goetz G, White S, Goetz F (2009) Analysis of genes isolated from plated hemocytes of the Pacific oyster, Crassostrea gigas. Mar Biotechnol 11:24–44
Travers MA, Meistertzheim AL, Cardinaud M, Friedman CS, Huchette S, Moraga D, Paillard C (2010) Gene expression patterns of abalone, Haliotis tuberculata, during successive infections by the pathogen Vibrio harveyi. J Invertebr Pathol 105:289–297
Li Y, Qin JG, Li XX, Benkendorff K (2009) Spawning-dependent stress responses in Pacific oysters Crassostrea gigas: a simulated bacterial challenge in oysters. Aquaculture 293:164–171
Dégremont L, Guyader T, Tourbiez D, Pépin J-F (2013) Is horizontal transmission of the Ostreid herpesvirus OsHV-1 in Crassostrea gigas affected by unselected or selected survival status in adults to juveniles? Aquaculture 408:51–57

Conceived and designed the experiments: PA, MAT and LD. Performed the experiments: PA, DT, MAT, JDL and LD. Analyzed the data: PA, JDL and LD. Wrote the manuscript: PA, MAT, JDL and LD. All authors read and approved the final manuscript.

We greatly thank Tristan Renault and Pierre Boudry for their support for the initial aim of this study. We thank the hatchery, nursery and genetic teams of the Laboratory of Genetics and Pathology of Marine Molluscs, Ifremer La Tremblade and Ifremer Bouin, for their assistance in oyster production. We thank the pathology team for technical support for the challenges in laboratory conditions. We gratefully acknowledge Agnès Vergnes for her technical support for the gene expression analyses and Aurélien Brun for his contribution to labeling the oysters. This study was supported by Ifremer through the research activity called "Amélioration par la sélection" and by the French Ministries of Ecology and Agriculture through the research activity called "AESTU".
Ifremer, Laboratoire de Génétique et Pathologie des Mollusques Marins, Avenue Mus de Loup, 17390, La Tremblade, France: Patrick Azéma, Marie-Agnès Travers & Lionel Dégremont
Ifremer, IHPE, UMR 5244, Univ. Perpignan Via Domitia, CNRS, Univ. Montpellier, 34095, Montpellier, France: Julien De Lorgeril
Correspondence to Patrick Azéma.
Azéma, P., Travers, M., De Lorgeril, J. et al. Can selection for resistance to OsHV-1 infection modify susceptibility to Vibrio aestuarianus infection in Crassostrea gigas? First insights from experimental challenges using primary and successive exposures. Vet Res 46, 139 (2015) doi:10.1186/s13567-015-0282-0
Keywords: Juvenile Stage, Oyster Population, Oyster Species, Massive Mortality Event
Plethysm
Plethysm was introduced as an operation on symmetric polynomials by D.E. Littlewood in his paper Polynomial Concomitants and Invariant Matrices, J. London Math. Soc. (1936) s1-11(1): 49-55. doi: 10.1112/jlms/s1-11.1.49, with the notation $\lambda\otimes \mu$ for what we now write $s_\mu\circ s_\lambda \qquad {\rm or}\qquad s_\mu[s_\lambda]$. Here, $s_\mu=s_\mu(x_1,x_2,x_3,\ldots)$ stands for the Schur symmetric polynomials. Plethysm of symmetric polynomials is entirely characterized by the fact that it satisfies the following properties (with $p_k=p_k(x_1,x_2,x_3,\ldots)$ standing for the power sum symmetric polynomials):
$(f+f')\circ g = (f\circ g)+(f'\circ g)$,
$(f\cdot f')\circ g = (f\circ g)\cdot (f'\circ g)$,
$p_k\circ (g +g')= (p_k\circ g) + (p_k\circ g')$,
$p_k\circ (g \cdot g')= (p_k\circ g) \cdot (p_k\circ g')$,
$p_k \circ p_j = p_{kj}$,
since any symmetric polynomial may be expanded as a linear combination of products of power sum polynomials. If one interprets the symmetric polynomials involved as characters of $GL(V)$-modules constructed through polynomial functors going from the category of finite dimensional vector spaces to itself, then plethysm corresponds to functor composition. Plethysm may also be naturally understood from the point of view of $\lambda$-ring theory. An important case for Representation Theory, Algebraic Geometry, and Geometric Complexity Theory is the plethysm $s_\mu\circ s_\lambda$ of Schur polynomials. For instance, using the above rules, and well-known change of basis formulas for symmetric polynomials, one may calculate that $s_4\circ s_2= s_8+s_{62}+s_{44}+s_{422}+s_{2222},$ which corresponds exactly to the Schur functor decomposition: $S^4\circ S^2=S^8+S^{62}+S^{44}+S^{422}+S^{2222}.$ Recall that, for $V$ a vector space admitting $\{x_1,x_2,\ldots,x_d\}$ as a basis, the vector space $S^d(V)$ has as a basis the set of degree $d$ "monomials" in the "variables" $x_i$.
A. Lascoux, Symmetric Functions.
Macdonald, I. G. Symmetric functions and Hall polynomials. Second edition. Oxford Mathematical Monographs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1995. x+475 pp.
Algebraic Combinatorics and Coinvariant Spaces, CMS Treatise in Mathematics, CRC Press, 2009. 221 pages. (see the CRC Press website) (see Table of contents and Introduction).
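As a small worked check of the rules above (our own illustration, not taken from the references listed here), consider the smaller plethysm $s_2\circ s_2$. Using the expansion $s_2=\tfrac{1}{2}(p_1^2+p_2)$, the fact that $p_1\circ s_2=s_2$, and $p_2\circ s_2=\tfrac{1}{2}(p_2^2+p_4)$ (since $p_2\circ g$ simply substitutes $x_i\mapsto x_i^2$ in $g$), the first two rules give $s_2\circ s_2 = \tfrac{1}{2}\big((p_1\circ s_2)^2+p_2\circ s_2\big) = \tfrac{1}{8}p_1^4+\tfrac{1}{4}p_1^2p_2+\tfrac{3}{8}p_2^2+\tfrac{1}{4}p_4$. Converting back to the Schur basis yields $s_2\circ s_2=s_4+s_{22}$, i.e. $S^2\circ S^2=S^4+S^{22}$, the classical decomposition of the symmetric square of the symmetric square.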
A characterization of some alternating groups A p+8 of degree p + 8 by OD
Shitian Liu1 & Zhanghua Zhang2
Let \(A_n\) be an alternating group of degree n. We know that \(A_{10}\) is 2-fold OD-characterizable and \(A_{125}\) is 6-fold OD-characterizable. In this note, we first show that \(A_{189}\) and \(A_{147}\) are 14-fold and 7-fold OD-characterizable, respectively, and second show that certain groups \(A_{p+8}\) with \(\pi ((p+8)!)=\pi (p!)\) and \(p<1000\) are OD-characterizable. The first result gives a negative answer to the Open Problem of Kogani-Moghaddam and Moghaddamfar.
Throughout, every group considered is finite, and every simple group is non-abelian. If G is a group, then the set of element orders of G is denoted by \(\omega (G)\) and the set of prime divisors of |G| is denoted by \(\pi (G)\). Associated with the set \(\omega (G)\) is a graph called the prime graph of G, written GK(G). The vertex set of GK(G) is \(\pi (G)\), and for distinct primes p, q, there is an edge between the two vertices p, q if \(p\cdot q\in \omega (G)\), which is written \(p\sim q\). We let s(G) denote the number of connected components of the prime graph GK(G). Moghaddamfar et al. in 2005 gave the following notions, which have inspired some authors' attention.
(Moghaddamfar et al. 2005) Let G be a finite group and \(|G|=p_{1}^{\alpha _{1}}p_{2}^{\alpha _{2}}\cdots p_{k}^{\alpha _{k}}\), where \(p_{i}\)s are primes and \(\alpha _{i}\)s are positive integers. For \(p\in \pi (G)\), let \(\deg (p):=|\{q\in \pi (G)|p\sim q\}|\), which we call the degree of p. We also define \(D(G):=(\deg (p_{1}),\deg (p_{2}),\ldots ,\deg (p_{k}))\), where \(p_{1}<p_{2}<\cdots <p_{k}\). We call D(G) the degree pattern of G. For a given finite group M, write \(h_{OD}(M)\) to denote the number of isomorphism classes of finite groups G such that (1) \(|G|=|M|\) and (2) \(D(G)=D(M)\).
(Moghaddamfar et al. 2005) A finite group M is called k-fold OD-characterizable if \(h_{OD}(M)=k\). Moreover, a 1-fold OD-characterizable group is simply called an OD-characterizable group.
Up to now, some groups have been proved to be k-fold OD-characterizable; we refer to the corresponding references of Akbari and Moghaddamfar (2015). Concerning an alternating group G with \(s(G)=1\), what is the influence of OD on the structure of the group? Recently, the following results have been given.
The following statements hold:
The alternating group \(A_{10}\) is 2-fold OD-characterizable (see Moghaddamfar and Zokayi 2010).
The alternating group \(A_{125}\) is 6-fold OD-characterizable (see Liu and Zhang Submitted).
The alternating group \(A_{p+3}\) except \(A_{10}\) is OD-characterizable (see Hoseini and Moghaddamfar 2010; Kogani-Moghaddam and Moghaddamfar 2012; Liu 2015; Moghaddamfar and Rahbariyan 2011; Moghaddamfar and Zokayi 2009; Yan and Chen 2012; Yan et al. 2013; Zhang and Shi 2008; Mahmoudifar and Khosravi 2015).
All alternating groups \(A_{p+5}\), where \(p+4\) is composite and \(p+6\) is a prime and \(5\ne p\in \pi (1000!)\), are OD-characterizable (see Yan et al. 2015).
In Moghaddamfar (2015), \(A_{189}\) is at least 14-fold OD-characterizable.
In this paper, we show the results as follows.
Theorem 4. The following hold:
The alternating group \(A_{189}\) of degree 189 is 14-fold OD-characterizable.
The alternating group \(A_{147}\) of degree 147 is 7-fold OD-characterizable.
These results give negative answers to the Open Problem (Kogani-Moghaddam and Moghaddamfar 2012).
Open Problem (Kogani-Moghaddam and Moghaddamfar 2012) All alternating groups \(A_m\), with \(m \ne 10\), are OD-characterizable.
We also prove that some alternating groups \(A_{p+8}\) with \(p<1000\) are OD-characterizable.
Theorem 5. Assume that p is a prime satisfying the following three conditions: (1) \(p\ne 139\) and \(p\ne 181\); (2) \(\pi ((p+8)!)=\pi (p!)\); (3) \(p\le 997\). Then the alternating group \(A_{p+8}\) of degree \(p+8\) is OD-characterizable.
Let G be a finite group, and let \(\mathrm {Soc}(G)\) denote the socle of G, the subgroup generated by the minimal normal subgroups of G. Let \(\mathrm {Syl}_{p}(G)\) be the set of all Sylow p-subgroups \(G_p\) of G, where \(p\in \pi (G)\). Let \(\mathrm {Aut}(G)\) and \(\mathrm {Out}(G)\) be the automorphism and outer-automorphism group of G, respectively. Let \(S_{n}\) denote the symmetric group of degree n. Let p be a prime divisor of a positive integer n; then the p-part of n is denoted by \(n_p\), namely, \(n_p\Vert n\). The other symbols are standard (see Conway et al. 1985, for instance).
Some preliminary results
In this section, some preliminary results are given to prove the main theorem.
Lemma 6. Let \(S=P_1\times \cdots \times P_r\), where the \(P_i\)'s are isomorphic non-abelian simple groups. Then \(\mathrm {Aut}(S)=\mathrm {Aut}(P_1)\times \cdots \times \mathrm {Aut}(P_r).S_r\). See Zavarnitsin (2000). \(\square\)
Lemma 7. Let \(A_{n}\) (or \(S_{n}\)) be an alternating (or a symmetric) group of degree n. Then the following hold.
Let \(p,q\in \pi (A_{n})\) be odd primes. Then \(p\sim q\) if and only if \(p+q\le n\).
Let \(p\in \pi (A_{n})\) be an odd prime. Then \(2\sim p\) if and only if \(p+4\le n\).
Let \(p,q\in \pi (S_{n})\). Then \(p\sim q\) if and only if \(p+q\le n\).
This follows easily from Zavarnitsin and Mazurov (1999). \(\square\)
Lemma 8. The number of groups of order 189 is 13. See Western (1898). \(\square\)
Lemma 9. Let P be a finite simple group and assume that r is the largest prime divisor of |P| with \(50< r<1000\). Then for every prime number s satisfying the inequality \((r - 1)/2 < s \le r\), the order of the factor group \(\mathrm {Aut}(P)/P\) is not divisible by s. This result is easy to check using Conway et al. (1985) and Zavarnitsine (2009). \(\square\)
Let \(n=p_1^{\alpha _1}p_2^{\alpha _2}\cdots p_r^{\alpha _r}\) where \(p_1,p_2,\ldots , p_r\) are different primes and \(\alpha _1,\alpha _2,\ldots ,\alpha _r\) are positive integers; then \(\exp (n,p_i)=\alpha _i\), i.e. \(p_{i}^{\alpha _i}\mid n\) but \(p_i^{\alpha _i+1}\nmid n\).
Lemma 10. Let \(L:=A_{p+8}\) be an alternating group of degree \(p+8\), where p is a prime and \(\pi ((p+8)!)=\pi (p!)\). Let \(|\pi (A_{p+8})|=d\) with d a positive integer. Then the following hold:
\(\deg (p)=4\) and \(\deg (r)=d-1\) for \(r\in \{2,3,5,7\}\).
\(\exp (|L|,2)\le p+7\).
\(\exp (|L|,r)=\sum \nolimits _{i=1}^{\infty }[\frac{p+8}{r^i}]\) for each \(r\in \pi (L)\backslash \{2\}\). Furthermore, \(\exp (|L|,r)<\frac{p+8}{2}\) where \(5\le r\in \pi (L)\). In particular, if \(r>[\frac{p+8}{2}]\), then \(\exp (|L|,r)=1\).
By Lemma 7, it is easy to compute that for an odd prime \(r, p\cdot r\in \omega (L)\) if and only if \(p+r\le p+8\). Hence \(r=3,5,7\). If \(r=2\), then since \(p+4\le p+8\), we have \(2\cdot p\in \omega (L)\). This completes (1).
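Before turning to parts (2) and (3), we note that the quantities appearing in Lemmas 7 and 10 are easy to verify by machine. The following Python sketch is our own illustration (it is not part of the original computations, and all function names are ours); it computes \(\exp (|A_n|,r)\) via Legendre's formula and the vertex degrees of \(GK(A_n)\) using the adjacency criterion of Lemma 7.

from sympy import isprime, primerange

def legendre_exponent(n, r):
    # Exponent of the prime r in n! (Legendre's formula).
    e, power = 0, r
    while power <= n:
        e += n // power
        power *= r
    return e

def exp_in_alternating(n, r):
    # exp(|A_n|, r): the exponent of r in |A_n| = n!/2.
    e = legendre_exponent(n, r)
    return e - 1 if r == 2 else e

def adjacent(r, s, n):
    # Adjacency of distinct primes r, s in GK(A_n), following Lemma 7.
    if 2 in (r, s):
        odd = r + s - 2  # the odd prime of the pair
        return odd + 4 <= n
    return r + s <= n

def degree(r, n):
    # deg(r) in GK(A_n).
    return sum(1 for s in primerange(2, n + 1) if s != r and adjacent(r, s, n))

# Example: p = 181, n = p + 8 = 189, i.e. L = A_189.
p, n = 181, 189
assert isprime(p) and not any(isprime(p + k) for k in range(1, 9))  # pi((p+8)!) = pi(p!)
print(degree(p, n))              # 4: p is adjacent exactly to 2, 3, 5 and 7, as in part (1)
print(exp_in_alternating(n, 2))  # 182 <= p + 7 = 188, as claimed in part (2)
print(exp_in_alternating(n, 19)) # 9, the value s(19) used later in the proof

The same functions can be used to check the condition \(\pi ((p+8)!)=\pi (p!)\) for any candidate prime p below 1000.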
Using the integer part (floor) function, $$\begin{aligned} \exp (|L|,2)&= \sum \limits _{i=1}^{\infty }\left[ \frac{p+8}{2^i}\right] -1\\&= \left( \left[ \frac{p+8}{2}\right] +\left[ \frac{p+8}{2^2}\right] + \left[ \frac{p+8}{2^3}\right] +\cdots \right) -1\\&\le \left( \frac{p+8}{2}+\frac{p+8}{2^2}+\frac{p+8}{2^3}+\cdots \right) -1\\&= p+7. \end{aligned}$$ This proves (2). Similarly, we can get (3). \(\square\)
Lemma 11. Let a, m be positive integers. If \((a,m)=1\), then the equation \(a^x\equiv 1\) (mod m) has solutions. In particular, if the order of a modulo m is h(a), then h(a) divides \(\phi (m)\), where \(\phi (m)\) denotes Euler's function of m. See Theorem 8.12 of Burton (2002). \(\square\)
Lemma 12. Let p be a prime and \(L:=A_{p+8}\) be the alternating group of degree \(p+8\) such that \(\pi ((p+8)!)=\pi (p!)\). Given \(P\in \mathrm {Syl}_p(L)\) and \(Q\in \mathrm {Syl}_q(L)\) with \(11\le q<p\le 1000\), the following results hold:
The order of \(N_L(P)\) is not divisible by \(q^{s(q)}\), where \(s(q)=\exp (|L|,q)\).
If \(p\in \{113, 139, 199, 211, 241, 283, 293, 337, 467, 509, 619, 787, 797, 839, 863, 887, 953, 997\}\), then \(|N_L(Q)|\) is not divisible by p.
If \(p\in \{181, 317, 409, 421, 523, 547, 577, 631, 661, 691, 709, 811, 829, 919\}\), then there is at least one prime r such that the order of r modulo p is less than \(p-1\), where \(11\le r<p\) and \(r\in \pi (p!)\).
By Lemma 11, the equation \(q^x\equiv 1\) (mod p) has solutions. Write h(q) for the order of q modulo p. If \(h(q)=p-1\), then q is a primitive root modulo p. By Lemma 11, we have \(h(q)\mid p-1\). By Lemma 10, we can get s(q). If \(h(q)>s(q)\), then \(q^{h(q)}\mid |L|\), a contradiction to the hypotheses. Then we can assume that \(h(q)\le s(q)\). The values of q and h(q), computed with GAP (2016), are given in Table 1. (Note that certain primes have order \(h(q)<p-1\) but \(h(q)>s(q)\); these are not listed in the table.) By the NC theorem, the factor group \(\frac{N_L(P)}{C_L(P)}\) is isomorphic to a subgroup of \(\mathrm {Aut}(P)\cong \mathbb {Z}_{p-1}\), where \(\mathbb {Z}_n\) is a cyclic group of order n. It follows that the order of \(\frac{N_L(P)}{C_L(P)}\) is less than or equal to \(p-1\). If \(11\le q<p\) and \(q^{s(q)}\mid |N_L(P)|\) where \(\exp (|L|,q)=s(q)\), then \(q\mid |C_L(P)|\). This forces \(q\sim p\), a contradiction. This ends the proof of (1). Next, assume that \(p\in \{ 113, 139, 199, 211, 241, 283, 293, 337, 467, 509, 619, 787, 797, 839, 863, 887, 953, 997\}\). If p divides the order of \(N_L(Q)\), then by the NC theorem and Table 1, \(p\mid |C_L(Q)|\) and so \(p\sim q\), a contradiction. This proves (2). (3) follows from Table 1. This completes the proof of Lemma 12. \(\square\)
Table 1 The values of p and h(q)
Proof of the main theorem
In this section, we first give the proof of Theorem 4 and second prove Theorem 5.
The proof of Theorem 4
We divide the proof into two steps.
Step 1. Let \(M=A_{189}\). Assume that G is a finite group such that $$|G|=|M|$$ $$D(G)=D(M).$$ By Lemma 7, the prime graph GK(G) of G is connected; in particular, GK(G) is the same as the prime graph GK(M).
Lemma 13. Let K be a maximal normal soluble subgroup of G. Then K is a \(\{2, 3, 5, 7\}\)-group; in particular, G is insoluble.
Assume the contrary. First we show that K is a \(181'\)-group. We assume that K contains an element x of order 181. Let C be the centralizer of \(\langle x\rangle\) in G and N be the normalizer of \(\langle x\rangle\) in G. It is easy to see from D(G) that C is a \(\{2, 3, 5, 7, 181\}\)-group.
By the NC theorem, N/C is isomorphic to a subgroup of the automorphism group \(\mathrm {Aut}(\langle x\rangle)\cong \mathbb {Z}_{2^2}\times \mathbb {Z}_{3^2}\times \mathbb {Z}_5\), where \(\mathbb {Z}_n\) is a cyclic group of order n. Hence, C is a \(\{2, 3, 5, 7, 181\}\)-group. By the Frattini argument, \(G=KN_G(\langle x\rangle)\) and so \(\{11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181\}\subseteq \pi (K)\). Since K is soluble, G has a Hall subgroup H of order \(109\cdot 181\). Since \(109\nmid 181-1\), H is cyclic and so \(109\cdot 181\in \omega (G)\), contradicting \(D(G)=D(M)\). Second, we show that K is a \(p'\)-group, where \(p\in \{11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179\}\). Let p be a prime divisor of |K| and P a Sylow p-subgroup of K. By the Frattini argument, \(G=KN_G(P)\). It follows from Lemma 12 that 181 is a divisor of \(|N_G(P)|\) if and only if \(p=19\). If \(181\nmid |\mathrm {Aut}(P)|\), then 181 divides the order of \(C_G(P)\) and so there is an element of order \(p\cdot 181\), a contradiction. On the other hand, suppose \(p=19\) and \(181\mid |\mathrm {Aut}(P)|\), where P is the Sylow 19-subgroup of K. By Lemma 10, \(\exp (|L|,19)=9\) and so \(|\frac{N_G(P)}{C_G(P)}|\mid \prod \nolimits _{i=1}^{9}19^{45}\cdot (19^i-1)\). It is easy to check that \(101\nmid \prod \nolimits _{i=1}^{9}19^{45}\cdot (19^i-1)\). If \(101\mid |N_G(P)|\), then 101 is a prime divisor of \(C_G(P)\). Set \(C=C_G(P)\) and \(C_{101}\in \mathrm {Syl}_{101}(C)\). Also \(\exp (|L|,101)=1\). By the Frattini argument, \(N=CN_N(C_{101})\) and so \(p\nmid |N_N(C_{101})|\). Thus \(181\mid |C|\) and so \(181\sim p\), a contradiction. So \(101\nmid |N_G(P)|\) and \(101\in \pi (K)\). Let \(K_{101}\in \mathrm {Syl}_{101}(K)\). Since \(G=KN_G(K_{101})\), 101 divides the order of \(N_G(K_{101})\), and then \(101\nmid |K|\), a contradiction. Therefore K is a \(\{2,3,5,7\}\)-group. Obviously, \(G\ne K\) and so G is insoluble. \(\square\)
Lemma 14. The quotient group G / K is an almost simple group. More precisely, there is a normal series such that \(S\le G/K\le \mathrm {Aut}(S)\), where S is isomorphic to \(A_{n}\) for \(n\in \{181, 182, 183, 184, 185, 186, 187, 188, 189\}\).
Let \(H=G/K\) and \(S=\mathrm {Soc}(H)\). Then \(S=B_1\times \cdots \times B_n\), where the \(B_i\)'s are non-abelian simple groups and \(S\le H\le \mathrm {Aut}(S)\). In what follows, we will prove that \(n=1\) and \(S\cong A_{n}\). Suppose the contrary. Obviously, 181 does not divide the order of S; otherwise, there is an element of order \(109\cdot 181\), contradicting \(D(G)=D(A_{189})\). Hence, for every i, we have that \(B_i\in \mathfrak {F}_{179}\), where \(\mathfrak {F}_p\) is the set of non-abelian simple groups S such that \(p\in \pi (S)\subseteq \{2,3,\cdots ,p\}\), p a prime. But by Lemma 13, K is a \(\{2,3,5,7\}\)-group. Therefore \(181\in \pi (H)\subseteq \pi (\mathrm {Aut}(S))\) and so 181 divides the order of \(\mathrm {Out}(S)\). By Lemma 6, \(\mathrm {Out}(S)=\mathrm {Out}(P_1)\times \cdots \times \mathrm {Out}(P_r)\), where the groups \(P_i\) satisfy \(S\cong P_1\times \cdots \times P_r\). Therefore, for some j, 181 divides the order of the outer-automorphism group of a direct product \(P_j\) of t isomorphic simple groups \(B_i\).
Since \(B_i\in \mathfrak {F}_{179}\), the order of \(\mathrm {Out}(B_i)\) is not divisible by 181 by Lemma 9. By Lemma 6, \(|\mathrm {Aut}(P_j)|=|\mathrm {Aut}(B_i)|^t\cdot t!\). This means \(t\ge 181\), and hence \(4^{181}\mid |G|\), a contradiction. Thus \(n=1\) and \(S=B_1\). By Lemma 13, we can assume that \(|S|=2^a\cdot 3^b\cdot 5^c\cdot 7^{d}\cdot 11^{18}\cdot 13^{15}\cdot 17^{11}\cdot 19^9\cdot 23^8\cdot 29^6\cdot 31^6\cdot 37^5\cdot 41^4\cdot 43^4\cdot 47^4\cdot 53^3\cdot 59^3\cdot 61^3\cdot 67^2\cdot 71^2\cdot 73^2\cdot 79^2\cdot 83^2\cdot 89^2\cdot 97\cdot 101\cdot 107\cdot 109\cdot 113\cdot 127\cdot 131\cdot 137\cdot 139\cdot 149\cdot 151\cdot 157\cdot 163\cdot 167\cdot 173\cdot 179\cdot 181\), where \(2\le a \le 182, 1\le b\le 93, 1\le c\le 45\) and \(1\le d\le 30\). By Zavarnitsine (2009), the only possible group is isomorphic to \(A_n\) with \(n\in \{181, 182, \ldots , 189\}\). This completes the proof. \(\square\)
We continue the proof of Theorem 4. By Lemma 14, S is isomorphic to \(A_{n}\) with \(n\in \{181, 182, \cdots , 189\}\), and \(S\le G/K\le \mathrm {Aut}(S)\). Let \(S\cong A_{181}\). Then \(A_{181}\le G/K\le S_{181}\). If \(G/K\cong A_{181}\), then \(|K|=182\cdot 183\cdot 184\cdot 185\cdot 186\cdot 187\cdot 188\cdot 189=2^7\cdot 3^5\cdot 5\cdot 7^2\cdot 11\cdot 13\cdot 17\cdot 23\cdot 31\cdot 37\cdot 47\cdot 61\) and so \(11,13,17,23,31,37,47,61\in \pi (K)\), contradicting Lemma 13. If \(G/K\cong S_{181}\), we also have that 11, 13, 17 or 19 divides |K|, contradicting Lemma 13. Similarly we can rule out the cases "\(S\cong A_{n}\) with \(n\in \{182, 183, \cdots , 187\}\)". Then \(A_{188}\le G/K\le S_{188}\). Therefore \(G/K\cong A_{188}\) or \(G/K\cong S_{188}\). Let \(G/K\cong A_{188}\). Then \(|K|=7\cdot 3^3\). By Conway et al. (1985), the order of \(\mathrm {Out}(A_{188})\) is 2 and the Schur multiplier of \(A_{188}\) is 2. Then G is isomorphic to \(K\times A_{188}\). By Lemma 8, there are 13 types of groups of order 189, each giving a group G satisfying \(|G|=|M|\) and \(D(G)=D(M)\). Let \(G/K\cong S_{188}\). Since \(|S_{188}|_2=|S_{189}|_2>|A_{189}|_2\), we rule out this case. Then \(A_{189}\le G/K\le S_{189}\). If \(G/K\cong A_{189}\), then order consideration implies that G is isomorphic to \(A_{189}\). If \(G/K\cong S_{189}\), then as \(|S_{189}|_2>|A_{189}|_2=|G|_2\), we rule out this case.
Step 2. Arguing as in the proof of (1), the following results are given:
K is a maximal soluble normal \(\{2,3,5,7\}\)-group.
\(S\le G/K\le \mathrm {Aut}(S)\), where S is isomorphic to one of the groups \(A_{139}, A_{140}, \ldots , A_{146}\) and \(A_{147}\).
Then \(A_{139}\le G/K\le S_{139}\). If the former, then \(11\mid |K|\), a contradiction. If the latter, we also have that \(11\mid |K|\) and so we rule out this case. Similarly we can rule out the cases "S is isomorphic to \(A_{140}, A_{141}, \ldots , A_{145}\)". Then \(A_{146}\le G/K\le S_{146}\). If \(G/K\cong A_{146}\), then \(|K|=3\cdot 7^2\). Since the order of \(\mathrm {Out}(A_{146})\) is 2 and the Schur multiplier of \(A_{146}\) is 2, G is isomorphic to \(K\times A_{146}\). By GAP (2016), there are six types of groups of order 147. So there are 6 groups with the hypotheses \(|G|=|A_{147}|\) and \(D(G)=D(A_{147})\). If \(G/K\cong S_{146}\), then as \(|S_{146}|_2>|A_{146}|_2=|A_{147}|_2=|G|_2\), we rule out this case. Then \(A_{147}\le G/K\le S_{147}\). If the former, then \(K=1\) and so \(G\cong A_{147}\), the desired result. If the latter, then as \(|S_{147}|_2>|A_{147}|_2=|G|_2\), we rule out this case.
We thus get that \(A_{147}\) is 7-fold OD-characterizable. This completes the proof of Theorem 4. \(\square\)
The proof of Theorem 5
Assume that \(|G|=|A_{p+8}|\) and \(D(G)=D(A_{p+8})\); then, by Lemma 7, the prime graph GK(G) of G is the same as the prime graph \(GK(A_{p+8})\) of \(A_{p+8}\). Arguing as in the proof of Theorem 4, the following statements are obtained:
Let K be a maximal normal soluble subgroup of G. Then K is a \(\{2, 3, 5, 7\}\)-group; in particular, G is insoluble.
There is a normal series such that \(S\le G/K\le \mathrm {Aut}(S)\), where S is isomorphic to \(A_{p+r}\) with \(0\le r\le 8\) and \(p\in \{\)113, 139, 199, 211, 241, 283, 293, 317, 337, 409, 421, 467, 509, 523, 547, 577, 619, 631, 661, 691, 709, 787, 797, 811, 829, 839, 863, 887, 919, 953, 997\(\}\).
In what follows, we consider the case "\(p=113\)". Suppose \(S\cong A_{113}\). Then \(A_{113}\le G/K\le S_{113}\). If \(G/K\cong A_{113}\), then 11 divides the order of K, a contradiction. If \(G/K\cong S_{113}\), then we also have that \(11\mid |K|\), a contradiction. Similarly we can get a contradiction when S is isomorphic to one of \(A_{114}, A_{115}, A_{116}, A_{117}, A_{118}, A_{119}\), and \(A_{120}\). Then \(A_{121}\le G/K\le S_{121}\). If \(G/K\cong A_{121}\), then \(K=1\), the desired result. If \(G/K\cong S_{121}\), then as \(|S_{121}|_2>|G|_2=|A_{121}|_2\), we get a contradiction. Similarly we can deal with the cases "\(p\in \{139, 199, 211, 241, 283, 293, 317, 337, 409, 421, 467, 509, 523, 547, 577, 619, 631, 661, 691, 709, 787, 797, 811, 829, 839, 863, 887, 919, 953, 997\}\)".
Non OD-characterization of some alternating groups
Assume that p is a prime and m is an integer larger than 3. If \(\pi ((p+m)!)\subseteq \pi (p!)\), then \(GK(A_{p+m})\) is connected. For the alternating group \(A_{p+m}, |A_{p+m}|=(p+m)|A_{p+m-1}|\). We shall use the notation v(n) to denote the number of isomorphism types of groups of order n, where n is a positive integer. Following the method of Moghaddamfar (2015), \(h_{OD}(A_{p+m})\ge 1+v(p+m)\), where \(\pi (A_{p+m})=\pi (A_p)\) and \(m\ge 1\) is a non-prime integer. The results are given in Table 2, which also contains some results of Liu and Zhang (Submitted), Moghaddamfar (2015), and Mahmoufifar and Khosravi (2014).
Table 2 Non OD-characterizability of alternating groups
Note that v(n), the number of groups of a given small order n, can be computed with GAP (2016). The GAP command is as follows:
gap> SmallGroupsInformation(n);
So we have the following conjecture.
Conjecture. Assume that p is a prime and \(m\ge 6\) is not a prime. If \(\pi ((p+m)!)\subseteq \pi (p!)\) and \(\pi (p+m)\subseteq \pi (m!)\), then \(A_{p+m}\) is not OD-characterizable.
In this paper, we have proved the following two results.
Result 1a: The alternating group \(A_{189}\) of degree 189 is 14-fold OD-characterizable.
Result 1b: The alternating group \(A_{147}\) of degree 147 is 7-fold OD-characterizable.
Result 2: Let p be a prime with the following three conditions: (1) \(p\ne 139\) and \(p\ne 181\); (2) \(\pi ((p+8)!)=\pi (p!)\); (3) \(p\le 997\). Then the alternating group \(A_{p+8}\) of degree \(p+8\) is OD-characterizable.
Akbari B, Moghaddamfar AR (2015) OD-characterization of certain four dimensional linear groups with related results concerning degree patterns. Front Math China 10(1):1–31. doi:10.1007/s11464-014-0430-2
Burton DM (2002) Elementary number theory, 5th edn. McGraw-Hill Companies Inc., New York
Conway JH, Curtis RT, Norton SP, Parker RA, Wilson RA (1985) Atlas of finite groups. Maximal subgroups and ordinary characters for simple groups. With computational assistance from J. G. Thackray. Oxford University Press, Eynsham
Hoseini AA, Moghaddamfar AR (2010) Recognizing alternating groups \(A_{p+3}\) for certain primes p by their orders and degree patterns. Front Math China 5(3):541–553. doi:10.1007/s11464-010-0011-y
Kogani-Moghaddam R, Moghaddamfar AR (2012) Groups with the same order and degree pattern. Sci China Math 55(4):701–720. doi:10.1007/s11425-011-4314-6
Liu S (2015) OD-characterization of some alternating groups. Turk J Math 39(3):395–407. doi:10.3906/mat-1407-53
Liu S, Zhang Z (Submitted) A characterization of \(A_{125}\) by OD
Mahmoufifar A, Khosravi B (2014) The answers to a problem and two conjectures about OD-characterization of finite groups. arXiv:1409.7903v1
Mahmoudifar A, Khosravi B (2015) Characterization of finite simple group \(A_{p+3}\) by its order and degree pattern. Publ Math Debr 86(1–2):19–30. doi:10.5486/PMD.2015.5916
Moghaddamfar AR (2015) On alternating and symmetric groups which are quasi OD-characterizable. J Algebra Appl 16(2):1750065. doi:10.1142/S0219498817500657
Moghaddamfar AR, Zokayi AR (2009) OD-characterization of alternating and symmetric groups of degrees 16 and 22. Front Math China 4(4):669–680. doi:10.1007/s11464-009-0037-1
Moghaddamfar AR, Zokayi AR (2010) OD-characterization of certain finite groups having connected prime graphs. Algebra Colloq 17(1):121–130. doi:10.1142/S1005386710000143
Moghaddamfar AR, Rahbariyan S (2011) More on the OD-characterizability of a finite group. Algebra Colloq 18(4):663–674. doi:10.1142/S1005386711000514
Moghaddamfar AR, Zokayi AR, Darafsheh MR (2005) A characterization of finite simple groups by the degrees of vertices of their prime graphs. Algebra Colloq 12(3):431–442. doi:10.1142/S1005386705000398
The GAP Group (2016) GAP-Groups, algorithms, and programming, version 4.8.4. The GAP Group. http://www.gap-system.org
Western AE (1898) Groups of order \(p^{3}q\). Proc Lond Math Soc S1–30(1):209–263. doi:10.1112/plms/s1-30.1.209
Yan Y, Chen G, Zhang L, Xu H (2013) Recognizing finite groups through order and degree patterns. Chin Ann Math Ser B 34(5):777–790. doi:10.1007/s11401-013-0787-7
Yan Y, Xu H, Chen G (2015) OD-characterization of alternating and symmetric groups of degree p + 5. Chin Ann Math Ser B 36(6):1001–1010. doi:10.1007/s11401-015-0923-7
Yan Y, Chen G (2012) OD-characterization of alternating and symmetric groups of degree 106 and 112. In: Proceedings of the international conference on algebra 2010. World Scientific Publisher, Hackensack, pp 690–696. doi:10.1142/9789814366311
Zavarnitsin AV (2000) Recognition of alternating groups of degrees r + 1 and r + 2 for prime r and of a group of degree 16 by the set of their element orders. Algebra Log 39(6):648–661. doi:10.1023/A:1010218618414
Zavarnitsine AV (2009) Finite simple groups with narrow prime spectrum. Sib Elektron Mat Izv 6:1–12
Zavarnitsin AV, Mazurov VD (1999) Element orders in coverings of symmetric and alternating groups. Algebra Log 38(3):296–315. doi:10.1007/BF02671740
Zhang LC, Shi WJ (2008) OD-characterization of \(A_{16}\). J Suzhou Univ (Nat Sci Ed) 24(2):7–10
SL and ZZ contributed equally to this paper. Both authors read and approved the final manuscript.
The first author was supported by the Opening Project of the Sichuan Province University Key Laboratory of Bridge Non-destruction Detecting and Engineering Computing (Grant Nos: 2013QYJ02 and 2014QYJ04), by the Scientific Research Project of Sichuan University of Science and Engineering (Grant No: 2014RC02) and by the Education Department of Sichuan Province (Grant Nos: 15ZA0235 and 16ZA0256). The authors are very grateful for the helpful suggestions of the referee. Dedicated to Prof Gui Min Wei on the occasion of his 70th birthday.
School of Science, Sichuan University of Science and Engineering, Xueyuan Street, Zigong, 643000, Sichuan, People's Republic of China: Shitian Liu
Sichuan Water Conservancy Vocational College, Chongzhou, Chengdu, 643000, Sichuan, People's Republic of China: Zhanghua Zhang
Correspondence to Shitian Liu. Shitian Liu and Zhanghua Zhang contributed equally to this work.
Liu, S., Zhang, Z. A characterization of some alternating groups A p+8 of degree p + 8 by OD. SpringerPlus 5, 1128 (2016) doi:10.1186/s40064-016-2763-7
Keywords: Element order, Alternating group, Simple group, Symmetric group, Degree pattern, Prime graph
Mathematics Subject Classification: 20D05
Mathematics (Theoretical)
For other uses, see Risk (disambiguation).
Risk is the possibility of losing something of value. Values (such as physical health, social status, emotional well-being, or financial wealth) can be gained or lost when taking risk resulting from a given action or inaction, foreseen or unforeseen (planned or not planned). Risk can also be defined as the intentional interaction with uncertainty.[1] Uncertainty is a potential, unpredictable, and uncontrollable outcome; risk is an aspect of action taken in spite of uncertainty. Risk perception is the subjective judgment people make about the severity and probability of a risk, and may vary person to person. Any human endeavour carries some risk, but some are much riskier than others.[2]
Firefighters at work
The Oxford English Dictionary cites the earliest use of the word in English (in the spelling of risque, from its French original 'risque') as of 1621, and the spelling as risk from 1655. It defines risk as: (Exposure to) the possibility of loss, injury, or other adverse or unwelcome circumstance; a chance or situation involving such a possibility.[3]
Risk is an influence affecting strategy caused by an incentive or condition that inhibits transformation to quality excellence.[4]
Risk is an uncertain event or condition that, if it occurs, has an effect on at least one [project] objective. (This definition, using project terminology, is easily made universal by removing references to projects.)[5]
The probability of something happening multiplied by the resulting cost or benefit if it does. (This concept is more properly known as the 'Expectation Value' or 'Risk Factor' and is used to compare levels of risk.)
The probability or threat of quantifiable damage, injury, liability, loss, or any other negative occurrence that is caused by external or internal vulnerabilities, and that may be avoided through preemptive action.
Finance: The possibility that an actual return on an investment will be lower than the expected return.
Insurance: A situation where the probability of a variable (such as burning down of a building) is known but the mode of occurrence or the actual value of the occurrence (whether the fire will occur at a particular property) is not.[6] A risk is not an uncertainty (where neither the probability nor the mode of occurrence is known), a peril (cause of loss), or a hazard (something that makes the occurrence of a peril more likely or more severe).
Securities trading: The probability of a loss or drop in value. Trading risk is divided into two general categories: (1) Systematic risk affects all securities in the same class and is linked to the overall capital-market system and therefore cannot be eliminated by diversification. Also called market risk. (2) Non-systematic risk is any risk that isn't market-related. Also called non-market risk, extra-market risk or diversifiable risk.
Workplace: Product of the consequence and probability of a hazardous event or phenomenon. For example, the risk of developing cancer is estimated as the incremental probability of developing cancer over a lifetime as a result of exposure to potential carcinogens (cancer-causing substances).
International Organization for Standardization
The International Organization for Standardization publication ISO 31000 (2009) / ISO Guide 73:2002 definition of risk is the 'effect of uncertainty on objectives'. In this definition, uncertainties include events (which may or may not happen) and uncertainties caused by ambiguity or a lack of information.
It also includes both negative and positive impacts on objectives. Many definitions of risk exist in common usage; however, this definition was developed by an international committee representing over 30 countries and is based on the input of several thousand subject matter experts. Very different approaches to risk management are taken in different fields, e.g. "Risk is the unwanted subset of a set of uncertain outcomes" (Cornelius Keating).
Risk can be seen as relating to the probability of uncertain future events.[7] For example, according to Factor Analysis of Information Risk, risk is:[7] the probable frequency and probable magnitude of future loss. In computer science this definition is used by The Open Group.[8]
OHSAS (Occupational Health & Safety Advisory Services) defines risk as the combination of the probability of a hazard resulting in an adverse event, and the severity of the event.[9]
In information security, risk is defined as "the potential that a given threat will exploit vulnerabilities of an asset or group of assets and thereby cause harm to the organization".[10]
Financial risk is often defined as the unpredictable variability or volatility of returns, and this would include both potential better-than-expected and worse-than-expected returns. References to negative risk below should be read as also applying to positive impacts or opportunity (e.g. for "loss" read "loss or gain") unless the context precludes this interpretation. The related terms "threat" and "hazard" are often used to mean something that could cause harm.
Practice areas
Risk is ubiquitous in all areas of life and risk management is something that we all must do, whether we are managing a major organisation or simply crossing the road. When describing risk, however, it is convenient to consider that risk practitioners operate in some specific practice areas.
Economic risk
Economic risks can be manifested in lower incomes or higher expenditures than expected. The causes can be many, for instance, the hike in the price for raw materials, the lapsing of deadlines for construction of a new operating facility, disruptions in a production process, emergence of a serious competitor on the market, the loss of key personnel, the change of a political regime, or natural disasters.
Health
Risks in personal health may be reduced by primary prevention actions that decrease early causes of illness or by secondary prevention actions after a person has clearly measured clinical signs or symptoms recognised as risk factors. Tertiary prevention reduces the negative impact of an already established disease by restoring function and reducing disease-related complications. Ethical medical practice requires careful discussion of risk factors with individual patients to obtain informed consent for secondary and tertiary prevention efforts, whereas public health efforts in primary prevention require education of the entire population at risk. In each case, careful communication about risk factors, likely outcomes and certainty must distinguish between causal events that must be decreased and associated events that may be merely consequences rather than causes. In epidemiology, the lifetime risk of an effect is the cumulative incidence, also called incidence proportion over an entire lifetime.[11]
Health, safety, and environment
In terms of occupational health & safety management, the term 'risk' may be defined as the most likely consequence of a hazard, combined with the likelihood or probability of it occurring.
Health, safety, and environment (HSE) are separate practice areas; however, they are often linked. The reason for this is typically to do with organizational management structures; however, there are strong links among these disciplines. One of the strongest links between these is that a single risk event may have impacts in all three areas, albeit over differing timescales. For example, the uncontrolled release of radiation or a toxic chemical may have immediate short-term safety consequences, more protracted health impacts, and much longer-term environmental impacts. Events such as Chernobyl, for example, caused immediate deaths, and in the longer term, deaths from cancers, and left a lasting environmental impact leading to birth defects, impacts on wildlife, etc.
Over time, a form of risk analysis called environmental risk analysis has developed. Environmental risk analysis is a field of study that attempts to understand events and activities that bring risk to human health or the environment.[12] Human health and environmental risk is the likelihood of an adverse outcome (see adverse outcome pathway). As such, risk is a function of hazard and exposure. Hazard is the intrinsic danger or harm that is posed, e.g. the toxicity of a chemical compound. Exposure is the likely contact with that hazard. Therefore, the risk of even a very hazardous substance approaches zero as the exposure nears zero, given a person's (or other organism's) biological makeup, activities and location (see exposome).[13] Another example of health risk is when certain behaviours, such as risky sexual behaviours, increase the likelihood of contracting HIV.[14]
Information technology and information security
Main article: IT risk
Information technology risk, or IT risk, IT-related risk, is a risk related to information technology. This relatively new term was developed as a result of an increasing awareness that information security is simply one facet of a multitude of risks that are relevant to IT and the real world processes it supports. The increasing dependence of modern society on information and computer networks (both in private and public sectors, including the military)[15][16][17] has led to new terms like IT risk and Cyberwarfare.
Main articles: Information assurance and Information security
Information security means protecting information and information systems from unauthorised access, use, disclosure, disruption, modification, perusal, inspection, recording or destruction.[18] Information security grew out of practices and procedures of computer security. Information security has grown to information assurance (IA), i.e. the practice of managing risks related to the use, processing, storage, and transmission of information or data and the systems and processes used for those purposes. While focused dominantly on information in digital form, the full range of IA encompasses not only digital but also analogue or physical form. Information assurance is interdisciplinary and draws from multiple fields, including accounting, fraud examination, forensic science, management science, systems engineering, security engineering, and criminology, in addition to computer science. So, IT risk is narrowly focused on computer security, while information security extends to risks related to other forms of information (paper, microfilm).
Information assurance risks include the ones related to the consistency of the business information stored in IT systems and the information stored by other means and the relevant business consequences.
Insurance
Insurance is a risk treatment option which involves risk sharing. It can be considered as a form of contingent capital and is akin to purchasing an option in which the buyer pays a small premium to be protected from a potential large loss. Insurance risk is often taken by insurance companies, who then bear a pool of risks including market risk, credit risk, operational risk, interest rate risk, mortality risk, longevity risks, etc.[19]
Business and management
Means of assessing risk vary widely between professions. Indeed, they may define these professions; for example, a doctor manages medical risk, while a civil engineer manages risk of structural failure. A professional code of ethics is usually focused on risk assessment and mitigation (by the professional on behalf of client, public, society or life in general).
In the workplace, incidental and inherent risks exist. Incidental risks are those that occur naturally in the business but are not part of the core of the business. Inherent risks have a negative effect on the operating profit of the business.
In human services
The experience of many people who rely on human services for support is that 'risk' is often used as a reason to prevent them from gaining further independence or fully accessing the community, and that these services are often unnecessarily risk averse.[20] "People's autonomy used to be compromised by institution walls, now it's too often our risk management practices", according to John O'Brien.[21] Michael Fischer and Ewan Ferlie (2013) find that contradictions between formal risk controls and the role of subjective factors in human services (such as the role of emotions and ideology) can undermine service values, so producing tensions and even intractable and 'heated' conflict.[22]
High reliability organisations (HROs)
A high reliability organisation (HRO) is an organisation that has succeeded in avoiding catastrophes in an environment where normal accidents can be expected due to risk factors and complexity. Most studies of HROs involve areas such as nuclear aircraft carriers, air traffic control, aerospace and nuclear power stations. Organizations such as these share in common the ability to consistently operate safely in complex, interconnected environments where a single failure in one component could lead to catastrophe. Essentially, they are organisations which appear to operate 'in spite' of an enormous range of risks. Some of these industries manage risk in a highly quantified and enumerated way. These include the nuclear power and aircraft industries, where the possible failure of a complex series of engineered systems could result in highly undesirable outcomes. The usual measure of risk for a class of events is then: R = probability of the event × the severity of the consequence. The total risk is then the sum of the individual class-risks; see below.[23]
In the nuclear industry, consequence is often measured in terms of off-site radiological release, and this is often banded into five or six-decade-wide bands.[clarification needed] The risks are evaluated using fault tree/event tree techniques (see safety engineering). Where these risks are low, they are normally considered to be "broadly acceptable".
A higher level of risk (typically up to 10 to 100 times what is considered broadly acceptable) has to be justified against the costs of reducing it further and the possible benefits that make it tolerable—these risks are described as "Tolerable if ALARP", where ALARP stands for "as low as reasonably practicable". Risks beyond this level are classified as "intolerable". The level of risk deemed broadly acceptable has been considered by regulatory bodies in various countries—an early attempt by UK government regulator and academic F. R. Farmer used the example of hill-walking and similar activities, which have definable risks that people appear to find acceptable. This resulted in the so-called Farmer Curve of acceptable probability of an event versus its consequence. The technique as a whole is usually referred to as probabilistic risk assessment (PRA) (or probabilistic safety assessment, PSA). See WASH-1400 for an example of this approach.
Finance
Main article: Financial risk
In finance, risk is the chance that the return achieved on an investment will be different from that expected, and also takes into account the size of the difference. This includes the possibility of losing some or all of the original investment. In a view advocated by Damodaran, risk includes not only "downside risk" but also "upside risk" (returns that exceed expectations).[24] Some regard the standard deviation of the historical returns or average returns of a specific investment as providing some historical measure of risk; see modern portfolio theory. Financial risk may be market-dependent, determined by numerous market factors, or operational, resulting from fraudulent behaviour (e.g. Bernard Madoff). A fundamental idea in finance is the relationship between risk and return (see modern portfolio theory). The greater the potential return one might seek, the greater the risk that one generally assumes. A free market reflects this principle in the pricing of an instrument: strong demand for a safer instrument drives its price higher (and its return correspondingly lower) while weak demand for a riskier instrument drives its price lower (and its potential return thereby higher). For example, a US Treasury bond is considered to be one of the safest investments. In comparison to an investment or speculative grade corporate bond, US Treasury notes and bonds yield lower rates of return. The reason for this is that a corporation is more likely to default on debt than the US government. Because the risk of investing in a corporate bond is higher, investors are offered a correspondingly higher rate of return. A popular risk measure is Value-at-Risk (VaR). There are different types of VaR: long term VaR, marginal VaR, factor VaR and shock VaR. The latter is used in measuring risk during extreme market stress conditions. In finance, risk has no single definition. Artzner et al.[25] write "we call risk the investor's future net worth". In Novak[26] "risk is a possibility of an undesirable event". In financial markets, one may need to measure credit risk, information timing and source risk, probability model risk, operational risk and legal risk if there are regulatory or civil actions taken as a result of "investor's regret". With the advent of automation in financial markets, the concept of "real-time risk" has gained a lot of attention.
Aldridge and Krawciw[27] define real-time risk as the probability of instantaneous or near-instantaneous loss, which can be due to flash crashes, other market crises, malicious activity by selected market participants and other events. A well-cited example[28] of real-time risk was a US $440 million loss incurred within 30 minutes by Knight Capital Group (KCG) on 1 August 2012; the culprit was a poorly-tested runaway algorithm deployed by the firm. Regulators have taken notice of real-time risk as well. Basel III[29] requires a real-time risk management framework for bank stability.
It is not always obvious if financial instruments are "hedging" (purchasing/selling a financial instrument specifically to reduce or cancel out the risk in another investment) or "speculation" (increasing measurable risk and exposing the investor to catastrophic loss in pursuit of very high windfalls that increase expected value). Some people may be "risk seeking", i.e. their utility function's second derivative is positive. Such an individual willingly pays a premium to assume risk (e.g. buys a lottery ticket).
Financial auditing
Main article: Audit risk
The financial audit risk model expresses the risk of an auditor providing an inappropriate opinion (or material misstatement) of a commercial entity's financial statements. It can be analytically expressed as AR = IR × CR × DR, where AR is audit risk, IR is inherent risk, CR is control risk and DR is detection risk. Note: As defined, audit risk does not consider the impact of an auditor misstatement and so is stated as a simple probability. The impact of misstatement must be considered when determining an acceptable audit risk.[30]
Security
AT YOUR OWN RISK (popular labelling)
Security risk management involves protection of assets from harm caused by deliberate acts. A more detailed definition is: "A security risk is any event that could result in the compromise of organizational assets i.e. the unauthorized use, loss, damage, disclosure or modification of organizational assets for the profit, personal interest or political interests of individuals, groups or other entities constitutes a compromise of the asset, and includes the risk of harm to people. Compromise of organizational assets may adversely affect the enterprise, its business units and their clients. As such, consideration of security risk is a vital component of risk management."[31]
Human factors
Main articles: Decision theory and Prospect theory
One of the growing areas of focus in risk management is the field of human factors where behavioural and organizational psychology underpin our understanding of risk based decision making. This field considers questions such as "how do we make risk based decisions?", "why are we irrationally more scared of sharks and terrorists than we are of motor vehicles and medications?"
In decision theory, regret (and anticipation of regret) can play a significant part in decision-making, distinct from risk aversion[32][33] (preferring the status quo in case one becomes worse off). Framing[34] is a fundamental problem with all forms of risk assessment. In particular, because of bounded rationality (our brains get overloaded, so we take mental shortcuts), the risk of extreme events is discounted because the probability is too low to evaluate intuitively.
As an example, one of the leading causes of death is road accidents caused by drunk driving – partly because any given driver frames the problem by largely or totally ignoring the risk of a serious or fatal accident. For instance, an extremely disturbing event (an attack by hijacking, or moral hazards) may be ignored in analysis despite the fact that it has occurred and has a nonzero probability. Or, an event that everyone agrees is inevitable may be ruled out of analysis due to greed or an unwillingness to admit that it is believed to be inevitable. These human tendencies towards error and wishful thinking often affect even the most rigorous applications of the scientific method and are a major concern of the philosophy of science.

All decision-making under uncertainty must consider cognitive bias, cultural bias, and notational bias: no group of people assessing risk is immune to "groupthink", the acceptance of obviously wrong answers simply because it is socially painful to disagree, especially where there are conflicts of interest.

Framing involves other information that affects the outcome of a risky decision. The right prefrontal cortex has been shown to take a more global perspective,[35] while greater left prefrontal activity relates to local or focal processing.[36] From the theory of leaky modules,[37] McElroy and Seta proposed that they could predictably alter the framing effect by the selective manipulation of regional prefrontal activity with finger tapping or monaural listening.[38] The result was as expected: rightward tapping or listening had the effect of narrowing attention such that the frame was ignored. This is a practical way of manipulating regional cortical activation to affect risky decisions, especially because directed tapping or listening is easily done.

Psychology of risk taking

A growing area of research examines the various psychological aspects of risk taking. Researchers typically run randomised experiments with a treatment and control group to ascertain the effect of different psychological factors that may be associated with risk taking. For example, positive and negative feedback about past risk taking can affect future risk taking. In one experiment, people who were led to believe they were very competent at decision making saw more opportunities in a risky choice and took more risks, while those led to believe they were not very competent saw more threats and took fewer risks.[39]

Maintenance

The concept of risk-based maintenance is an advanced form of reliability-centred maintenance. In the case of the chemical industry, apart from the probability of failure, the consequences of failure are also very important. Therefore, the selection of maintenance policies should be based on risk instead of reliability alone. Risk-based maintenance methodology acts as a tool for maintenance planning and decision making, aiming to reduce the probability of failure and its consequences. In risk-based maintenance decision making, maintenance resources can be used optimally based on the risk class (high, medium, or low) of equipment or machines, to achieve tolerable risk criteria.[40]

Cybersecurity[41]

Closely related to information assurance and security risk, cybersecurity is the application of system security engineering[42] in order to address the compromise of company cyber-assets required for business or mission purposes.
In order to address cyber-risk, cybersecurity applies security to the supply chain, to the design and production environment for a product or service, and to the product itself, in order to provide efficient and appropriate security commensurate with the value of the asset to the mission or business process.

Risk assessment and analysis

Main articles: Risk assessment and Operational risk management

Since risk assessment and management is essential in security management, both are tightly related. Security assessment methodologies like CRAMM contain risk assessment modules as an important part of the first steps of the methodology. On the other hand, risk assessment methodologies like Mehari evolved to become security assessment methodologies. An ISO standard on risk management (principles and guidelines on implementation) was published under code ISO 31000 on 13 November 2009.

Quantitative analysis

There are many formal methods used to "measure" risk. Often the probability of a negative event is estimated by using the frequency of past similar events. Probabilities for rare failures may be difficult to estimate, which makes risk assessment difficult in hazardous industries such as nuclear energy, where failures are rare but their harmful consequences are severe. Statistical methods may also require the use of a cost function, which in turn may require the calculation of the cost of loss of a human life. This is a difficult problem. One approach is to ask what people are willing to pay to insure against death[43] or radiological release (e.g. GBq of radio-iodine), but as the answers depend very strongly on the circumstances it is not clear that this approach is effective.

Risk is often measured as the expected value of an undesirable outcome. This combines the probabilities of various possible events and some assessment of the corresponding harm into a single value. See also expected utility. The simplest case is a binary possibility of accident or no accident. The associated formula for calculating risk is then:

$$R = (\text{probability of the accident occurring}) \times (\text{expected loss in case of the accident})$$

For example, if performing activity X has a probability of 0.01 of suffering an accident of type A, with a loss of 1000, then the total risk is a loss of 10, the product of 0.01 and 1000.

Situations are sometimes more complex than the simple binary case. In a situation with several possible accidents, the total risk is the sum of the risks for each different accident, provided that the outcomes are comparable:

$$R = \sum_{\text{all accidents}} (\text{probability of the accident occurring}) \times (\text{expected loss in case of the accident})$$

For example, if performing activity X has a probability of 0.01 of suffering an accident of type A, with a loss of 1000, and a probability of 0.000001 of suffering an accident of type B, with a loss of 2,000,000, then the total loss expectancy is 12, which is equal to a loss of 10 from an accident of type A and 2 from an accident of type B.
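To make the arithmetic of the worked example above concrete, here is a minimal Python sketch (the helper name and the list layout are illustrative choices, not taken from any cited source) that sums probability times loss over a set of possible accidents:

# Expected-loss risk measure: sum of (probability x loss) over all accidents,
# using the figures from the example above.
def total_expected_loss(accidents):
    # accidents is a list of (probability, loss) pairs; outcomes must be comparable
    return sum(p * loss for p, loss in accidents)

activity_x = [
    (0.01, 1_000),          # accident of type A
    (0.000001, 2_000_000),  # accident of type B
]

print(total_expected_loss(activity_x))  # 12.0, i.e. 10 from A plus 2 from B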
One of the first major uses of this concept was for the planning of the Delta Works in 1953, a flood protection program in the Netherlands, with the aid of the mathematician David van Dantzig.[44] The kind of risk analysis pioneered there has become common today in fields like nuclear power, aerospace and the chemical industry.

In statistical decision theory, the risk function is defined as the expected value of a given loss function as a function of the decision rule used to make decisions in the face of uncertainty.

Fear as intuitive risk assessment

People may rely on their fear and hesitation to keep them out of the most profoundly unknown circumstances. Fear is a response to perceived danger. Risk could be said to be the way we collectively measure and share this "true fear"—a fusion of rational doubt, irrational fear, and a set of unquantified biases from our own experience.

The field of behavioural finance focuses on human risk aversion, asymmetric regret, and other ways in which human financial behaviour varies from what analysts call "rational". Risk in that case is the degree of uncertainty associated with a return on an asset. Recognizing and respecting the irrational influences on human decision making may do much to reduce disasters caused by naive risk assessments that presume rationality but in fact merely fuse many shared biases.

Anxiety, risk and decision making

Fear, anxiety and risk

According to one set of definitions, fear is a fleeting emotion ascribed to a particular object, while anxiety is a trait of fear (this refers to "trait anxiety", as distinct from how the term "anxiety" is generally used) that lasts longer and is not attributed to a specific stimulus (these particular definitions are not used by all authors cited on this page).[45] Some studies show a link between anxious behaviour and risk (the chance that an outcome will have an unfavorable result).[46] Joseph Forgas introduced valence-based research in which emotions are grouped as either positive or negative (Lerner and Keltner, 2000). Positive emotions, such as happiness, are believed to lead to more optimistic risk assessments, while negative emotions, such as anger, lead to pessimistic risk assessments. As an emotion with a negative valence, fear, and therefore anxiety, has long been associated with negative risk perceptions. Under the more recent appraisal tendency framework of Jennifer Lerner et al., which refutes Forgas' notion of valence and promotes the idea that specific emotions have distinctive influences on judgments, fear is still related to pessimistic expectations.[47]

Psychologists have demonstrated that increases in anxiety and increases in risk perception are related, and that people who are habituated to anxiety experience this awareness of risk more intensely than normal individuals.[48] In decision-making, anxiety promotes the use of biases and quick thinking to evaluate risk; this is referred to as affect-as-information according to Clore, 1983.
However, the accuracy of these risk perceptions when making choices is not known.[49]

Consequences of anxiety

Experimental studies show that brief surges in anxiety are correlated with surges in general risk perception.[49] Anxiety exists when the presence of a threat is perceived (Maner and Schmidt, 2006).[48] As risk perception increases, it stays related to the particular source impacting the mood change, as opposed to spreading to unrelated risk factors.[49] This increased awareness of a threat is significantly more emphasised in people who are conditioned to anxiety.[50] For example, anxious individuals who are predisposed to generating reasons for negative results tend to exhibit pessimism.[50] Also, findings suggest that the perception of a lack of control and a lower inclination to participate in risky decision-making (across various behavioural circumstances) is associated with individuals experiencing relatively high levels of trait anxiety.[48] In the previous instance, there is supporting clinical research that links emotional evaluation (of control), the anxiety that is felt, and the option of risk avoidance.[48]

There are various views that anxious or fearful emotions cause people to access involuntary responses and judgments when making decisions that involve risk. Joshua A. Hemmerich et al. probe deeper into anxiety and its impact on choices by exploring "risk-as-feelings": quick, automatic, and natural reactions to danger that are based on emotions. This notion is supported by an experiment that engaged physicians in a simulated perilous surgical procedure. It was demonstrated that a measurable amount of the participants' anxiety about patient outcomes was related to previous (experimentally created) regret and worry, and ultimately caused the physicians to be led by their feelings over any information or guidelines provided during the mock surgery. Additionally, their emotional levels, which were adjusted along with the simulated patient status, suggest that anxiety level and the respective decision made are correlated with the type of bad outcome that was experienced in the earlier part of the experiment.[51] Similarly, another view of anxiety and decision-making is dispositional anxiety, where emotional states, or moods, are cognitive and provide information about future pitfalls and rewards (Maner and Schmidt, 2006). When experiencing anxiety, individuals draw on personal judgments referred to as pessimistic outcome appraisals. These emotions promote biases for risk avoidance and promote risk tolerance in decision-making.[50]

Dread risk

It is common for people to dread some risks but not others: they tend to be very afraid of epidemic diseases, nuclear power plant failures, and plane accidents but are relatively unconcerned about some highly frequent and deadly events, such as traffic crashes, household accidents, and medical errors. One key distinction of dreadful risks seems to be their potential for catastrophic consequences,[52] threatening to kill a large number of people within a short period of time.[53] For example, immediately after the 11 September attacks, many Americans were afraid to fly and took their car instead, a decision that led to a significant increase in the number of fatal crashes in the period following the attacks compared with the same period before them.[54][55] Different hypotheses have been proposed to explain why people fear dread risks.
First, the psychometric paradigm[52] suggests that high lack of control, high catastrophic potential, and severe consequences account for the increased risk perception and anxiety associated with dread risks. Second, because people estimate the frequency of a risk by recalling instances of its occurrence from their social circle or the media, they may overvalue relatively rare but dramatic risks because of their overpresence and undervalue frequent, less dramatic risks.[55] Third, according to the preparedness hypothesis, people are prone to fear events that have been particularly threatening to survival in human evolutionary history.[56] Given that in most of human evolutionary history people lived in relatively small groups, rarely exceeding 100 people,[57] a dread risk, which kills many people at once, could potentially wipe out one's whole group. Indeed, research found[58] that people's fear peaks for risks killing around 100 people but does not increase if larger groups are killed. Fourth, fearing dread risks can be an ecologically rational strategy.[59] Besides killing a large number of people at a single point in time, dread risks reduce the number of children and young adults who would have potentially produced offspring. Accordingly, people are more concerned about risks killing younger, and hence more fertile, groups.[60]

Anxiety and judgmental accuracy

The relationship between higher levels of risk perception and "judgmental accuracy" in anxious individuals remains unclear (Joseph I. Constans, 2001). There is a chance that "judgmental accuracy" is correlated with heightened anxiety. Constans conducted a study to examine how worry propensity (together with current mood and trait anxiety) might influence college students' estimation of their performance on an upcoming exam; the study found that worry propensity predicted subjective risk bias (errors in their risk assessments), even after variance attributable to current mood and trait anxiety had been removed.[49] Another experiment suggests that trait anxiety is associated with pessimistic risk appraisals (heightened perceptions of the probability and degree of suffering associated with a negative experience), while controlling for depression.[48]

Other considerations

Risk and uncertainty

In his seminal work Risk, Uncertainty, and Profit, Frank Knight (1921) established the distinction between risk and uncertainty:

... Uncertainty must be taken in a sense radically distinct from the familiar notion of Risk, from which it has never been properly separated. The term "risk," as loosely used in everyday speech and in economic discussion, really covers two things which, functionally at least, in their causal relations to the phenomena of economic organization, are categorically different. ... The essential fact is that "risk" means in some cases a quantity susceptible of measurement, while at other times it is something distinctly not of this character; and there are far-reaching and crucial differences in the bearings of the phenomenon depending on which of the two is really present and operating. ... It will appear that a measurable uncertainty, or "risk" proper, as we shall use the term, is so far different from an unmeasurable one that it is not in effect an uncertainty at all. We ... accordingly restrict the term "uncertainty" to cases of the non-quantitive type.[61]

Thus, Knightian uncertainty is immeasurable, not possible to calculate, while in the Knightian sense risk is measurable.
Another distinction between risk and uncertainty is proposed by Douglas Hubbard:[62][63]

Uncertainty: The lack of complete certainty, that is, the existence of more than one possibility. The "true" outcome/state/result/value is not known.
Measurement of uncertainty: A set of probabilities assigned to a set of possibilities. Example: "There is a 60% chance this market will double in five years."
Risk: A state of uncertainty where some of the possibilities involve a loss, catastrophe, or other undesirable outcome.
Measurement of risk: A set of possibilities each with quantified probabilities and quantified losses. Example: "There is a 40% chance the proposed oil well will be dry with a loss of $12 million in exploratory drilling costs."

In this sense, one may have uncertainty without risk but not risk without uncertainty. We can be uncertain about the winner of a contest, but unless we have some personal stake in it, we have no risk. If we bet money on the outcome of the contest, then we have a risk. In both cases there is more than one outcome. The measure of uncertainty refers only to the probabilities assigned to outcomes, while the measure of risk requires both probabilities for outcomes and losses quantified for outcomes.

Risk attitude, appetite and tolerance

Main article: Risk aversion

The terms risk attitude, appetite, and tolerance are often used similarly to describe an organisation's or individual's attitude towards risk-taking. One's attitude may be described as risk-averse, risk-neutral, or risk-seeking. Risk tolerance looks at acceptable or unacceptable deviations from what is expected, while risk appetite looks at how much risk one is willing to accept. There can still be deviations that are within a risk appetite. For example, recent research finds that insured individuals are significantly likely to divest from risky asset holdings in response to a decline in health, controlling for variables such as income, age, and out-of-pocket medical expenses.[64]

Gambling is a risk-increasing investment, wherein money on hand is risked for a possible large return, but with the possibility of losing it all. Purchasing a lottery ticket is a very risky investment with a high chance of no return and a small chance of a very high return. In contrast, putting money in a bank at a defined rate of interest is a risk-averse action that gives a guaranteed return of a small gain and precludes other investments with possibly higher gain. The possibility of getting no return on an investment is also known as the rate of ruin.

Risk as a vector quantity

Hubbard also argues that defining risk as the product of impact and probability presumes, unrealistically, that decision-makers are risk-neutral.[63] A risk-neutral person's utility is proportional to the expected value of the payoff. For example, a risk-neutral person would consider a 20% chance of winning $1 million exactly as desirable as getting a certain $200,000. However, most decision-makers are not actually risk-neutral and would not consider these equivalent choices. This gave rise to prospect theory and cumulative prospect theory. Hubbard proposes instead to describe risk as a vector quantity that distinguishes the probability and magnitude of a risk. Risks are then described as a set or function of possible payoffs (gains or losses) with their associated probabilities, and this array is collapsed into a scalar value according to a decision-maker's risk tolerance.
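As a rough numerical illustration of that last point, the sketch below collapses the gamble mentioned above (a 20% chance of $1,000,000 versus a certain $200,000) into a single number in two ways: the risk-neutral expected value, and a certainty equivalent under a logarithmic utility. The log utility and the assumed starting wealth are illustrative choices only, not anything prescribed by Hubbard:

import math

# A "risk" described as possible payoffs with associated probabilities:
# the gamble from the text, 20% chance of $1,000,000 and 80% chance of nothing.
gamble = [(0.20, 1_000_000), (0.80, 0)]

def expected_value(payoffs):
    # What a risk-neutral decision-maker would use.
    return sum(p * x for p, x in payoffs)

def certainty_equivalent(payoffs, wealth=100_000):
    # Collapse the payoff distribution to a scalar with log utility,
    # a simple stand-in for a risk-averse decision-maker's risk tolerance.
    expected_utility = sum(p * math.log(wealth + x) for p, x in payoffs)
    return math.exp(expected_utility) - wealth

print(expected_value(gamble))               # 200000.0, same as the sure thing
print(round(certainty_equivalent(gamble)))  # roughly 61540, well below 200000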
List of related booksEdit This is a list of books about risk issues. Acceptable Risk Baruch Fischhoff, Sarah Lichtenstein, Paul Slovic, Steven L. Derby, and Ralph Keeney 1984 Against the Gods: The Remarkable Story of Risk Peter L. Bernstein 1996 At risk: Natural hazards, people's vulnerability and disasters Piers Blaikie, Terry Cannon, Ian Davis, and Ben Wisner 1994 Building Safer Communities. Risk Governance, Spatial Planning and Responses to Natural Hazards Urbano Fra Paleo 2009 Dangerous Earth: An introduction to geologic hazards Barbara W. Murck, Brian J. Skinner, Stephen C. Porter 1998 Disasters and Democracy Rutherford H. Platt 1999 Earth Shock: Hurricanes, volcanoes, earthquakes, tornadoes and other forces of nature W. Andrew Robinson 1993 Human System Response to Disaster: An Inventory of Sociological Findings Thomas E. Drabek 1986 Judgment Under Uncertainty: heuristics and biases Daniel Kahneman, Paul Slovic, and Amos Tversky 1982 Mapping Vulnerability: disasters, development, and people Greg Bankoff, Georg Frerks, and Dorothea Hilhorst 2004 Man and Society in Calamity: The Effects of War, Revolution, Famine, Pestilence upon Human Mind, Behavior, Social Organization and Cultural Life Pitirim Sorokin 1942 Mitigation of Hazardous Comets and Asteroids Michael J.S. Belton, Thomas H. Morgan, Nalin H. Samarasinha, Donald K. Yeomans 2005 Natural Disaster Hotspots: a global risk analysis Maxx Dilley 2005 Natural Hazard Mitigation: Recasting disaster policy and planning David Godschalk, Timothy Beatley, Philip Berke, David Brower, and Edward J. Kaiser 1999 Natural Hazards: Earth's processes as hazards, disasters, and catastrophes Edward A. Keller, and Robert H. Blodgett 2006 Normal Accidents. Living with high-risk technologies Charles Perrow 1984 Paying the Price: The status and role of insurance against natural disasters in the United States Howard Kunreuther, and Richard J. Roth 1998 Planning for Earthquakes: Risks, politics, and policy Philip R. Berke, and Timothy Beatley 1992 Practical Project Risk Management: The ATOM Methodology David Hillson and Peter Simon 2012 Reduction and Predictability of Natural Disasters John B. Rundle, William Klein, Don L. Turcotte 1996 Regions of Risk: A geographical introduction to disasters Kenneth Hewitt 1997 Risk Analysis: a quantitative guide David Vose 2008 Risk: An introduction (ISBN 978-0-415-49089-4) Bernardus Ale 2009 Risk and Culture: An essay on the selection of technical and environmental dangers Mary Douglas, and Aaron Wildavsky 1982 Socially Responsible Engineering: Justice in Risk Management (ISBN 978-0-471-78707-5) Daniel A. Vallero, and P. Aarne Vesilind 2006 Swimming with Crocodiles: The Culture of Extreme Drinking Marjana Martinic and Fiona Measham (eds.) 2008 The Challenger Launch Decision: Risky Technology, Culture and Deviance at NASA Diane Vaughan 1997 The Environment as Hazard Ian Burton, Robert Kates, and Gilbert F. White 1978 The Social Amplification of Risk Nick Pidgeon, Roger E. Kasperson, and Paul Slovic 2003 What is a Disaster? New answers to old questions Ronald W. 
Perry, and Enrico Quarantelli 2005 Floods: From Risk to Opportunity (IAHS Red Book Series) Ali Chavoshian, and Kuniyoshi Takeuchi 2013 The Risk Factor: Why Every Organization Needs Big Bets, Bold Characters, and the Occasional Spectacular Failure Deborah Perry Piscione 2014 Ambiguity aversion Benefit shortfall Countermeasure Early case assessment Event chain methodology Fuel price risk management Global catastrophic risk Global Risk Forum GRF Davos Hazard (risk) Inherent risk Inherent risk (accounting) International Risk Governance Council ISO/PAS 28000 Life-critical system Probabilistic risk assessment Risk compensation Peltzman effect Risk-neutral measure Sampling risk ^ Cline, Preston B. (3 March 2015). "The Merging of Risk Analysis and Adventure Education" (PDF). Wilderness Risk Management. 5 (1): 43–45. Retrieved 12 December 2016. ^ Hansson, Sven Ove; Zalta, Edward N. (Spring 2014). "Risk". The Stanford Encyclopedia of Philosophy. Retrieved 9 May 2014. ^ "risk". Oxford English Dictionary (3rd ed.). Oxford University Press. September 2005. (Subscription or UK public library membership required.) ^ Elias, Victor (2017). The Quest for Ascendant Quality. Sparta, NJ USA: On QUEST. p. 167. ISBN 9780999080115. ^ A Guide to the Project Management Body of Knowledge (4th Edition) ANSI/PMI 99-001-2008 ^ "risk". BusinessDictionary. ^ a b "An Introduction to Factor Analysis of Information Risk (FAIR)", Risk Management Insight LLC, November 2006 Archived 18 November 2014 at the Wayback Machine;. ^ Technical Standard Risk Taxonomy ISBN 1-931624-77-1 Document Number: C081 Published by The Open Group, January 2009. ^ "Risk is a combination of the likelihood of an occurrence of a hazardous event or exposure(s) and the severity of injury or ill health that can be caused by the event or exposure(s)" (OHSAS 18001:2007). ^ ISO/IEC 27005:2008. ^ Rychetnik L, Hawe P, Waters E, Barratt A, Frommer M (July 2004). "A glossary for evidence based public health". J Epidemiol Community Health. 58 (7): 538–45. doi:10.1136/jech.2003.011585. PMC 1732833. PMID 15194712. ^ Gurjar, Bhola Ram; Mohan, Manju (2002). "Environmental Risk Analysis: Problems and Perspectives in Different Countries". Risk: Health, Safety & Environment. 13: 3. Retrieved 23 March 2013. ^ Vallero, Daniel A. (2016). "Environmental Biotechnology: A Biosystems Approach." Amsterdam: Academic Press. ISBN 978-0-12-407776-8. ^ Potter, Patricia (2013). Fundamentals of nursing. St. Louis, Mo: Mosby Elsevier. p. 386. ISBN 9780323079334. ^ Cortada, James W. (4 December 2003). The Digital Hand: How Computers Changed the Work of American Manufacturing, Transportation, and Retail Industries. USA: Oxford University Press. p. 512. ISBN 978-0-19-516588-3. ^ Cortada, James W. (3 November 2005). The Digital Hand: Volume II: How Computers Changed the Work of American Financial, Telecommunications, Media, and Entertainment Industries. USA: Oxford University Press. ISBN 978-0-19-516587-6. ^ Cortada, James W. (6 November 2007). The Digital Hand, Vol 3: How Computers Changed the Work of American Public Sector Industries. USA: Oxford University Press. p. 496. ISBN 978-0-19-516586-9. ^ 44 U.S.C. § 3542(b)(1). ^ Carson, James M.; Elyasiani, Elyas; Mansur, Iqbal (2008). "Market Risk, Interest Rate Risk, and Interdependencies in Insurer Stock Returns: A System-GARCH Model". The Journal of Risk and Insurance. 75 (4): 873–891. CiteSeerX 10.1.1.568.4087. doi:10.1111/j.1539-6975.2008.00289.x. 
^ A Positive Approach To Risk Requires Person Centred Thinking, Neill et al., Tizard Learning Disability Review http://pierprofessional.metapress.com/content/vr700311x66j0125/[permanent dead link] ^ John O'Brien cited in Sanderson, H. Lewis, J. A Practical Guide to Delivering Personalisation; Person Centred Practice in Health and Social Care p211 ^ Fischer, Michael Daniel; Ferlie, Ewan (1 January 2013). "Resisting hybridisation between modes of clinical risk management: Contradiction, contest, and the production of intractable conflict". Accounting, Organizations and Society. 38 (1): 30–49. doi:10.1016/j.aos.2012.11.002. ^ Sebastián Martorell, Carlos Guedes Soares, Julie Barnett (2014). Safety, Reliability and Risk Analysis: Theory, Methods and Applications. CRC Press. p. 671. ISBN 9781482266481. CS1 maint: Multiple names: authors list (link) ^ Damodaran, Aswath (2003). Investment Philosophies: Successful Investment Philosophies and the Greatest Investors Who Made Them Work. Wiley. p. 15. ISBN 978-0-471-34503-9. ^ Artzner, P.; Delbaen, F.; Eber, J.-M.; Heath, D. (1999). "Coherent measures of risk". Math. Finance. 9 (3): 203–228. doi:10.1111/1467-9965.00068. ^ Novak S.Y. 2011. Extreme value methods with applications to finance. London: CRC. ISBN 978-1-43983-574-6. ^ Aldridge, I., Krawciw, S., 2017. Real-Time Risk: What Investors Should Know About Fintech, High-Frequency Trading and Flash Crashes. Hoboken: Wiley. ISBN 978-1119318965. ^ McCrank, John (17 October 2012). "Knight Capital posts $389.9 million loss on trading glitch". Reuters. ^ "Basel III: international regulatory framework for banks". www.bis.org. 7 December 2017. ^ Arco van de Ven. Marijn van Daelen, Christoph van der Elst (eds.). Risk Management and Corporate Governance: Interconnections in Law: Chapter: Risk Management from an accounting perspective. pp. 16–17. CS1 maint: Uses editors parameter (link) ^ Julian Talbot and Miles Jakeman Security Risk Management Body of Knowledge, John Wiley & Sons, 2009. ^ Virine, L., & Trumper, M. ProjectThink. Gower. 2013 ^ Virine, L., & Trumper, M. Project Risk Analysis Made Ridiculously Simple. World Scientific Publishing. 2017 ^ Amos Tversky / Daniel Kahneman, 1981. "The Framing of Decisions and the Psychology of Choice."[verification needed] ^ Schatz, J.; Craft, S.; Koby, M.; DeBaun, M. R. (2004). "Asymmetries in visual-spatial processing following childhood stroke". Neuropsychology. 18 (2): 340–352. doi:10.1037/0894-4105.18.2.340. PMID 15099156. ^ Volberg, G.; Hubner, R. (2004). "On the role of response conflicts and stimulus position for hemispheric differences in global/local processing: An ERP study". Neuropsychologia (Submitted manuscript). 42 (13): 1805–1813. doi:10.1016/j.neuropsychologia.2004.04.017. PMID 15351629. ^ Drake, R. A. (2004). Selective potentiation of proximal processes: Neurobiological mechanisms for spread of activation. Medical Science Monitor, 10, 231–234. ^ McElroy, T.; Seta, J. J. (2004). "On the other hand, am I rational? Hemisphere activation and the framing effect". Brain and Cognition. 55 (3): 572–580. doi:10.1016/j.bandc.2004.04.002. PMID 15223204. ^ Krueger, Norris, and Peter R. Dickson. "How believing in ourselves increases risk taking: perceived self-efficacy and opportunity recognition." Decision Sciences 25, no. 3 (1994): 385–400. ^ Arunraj, N.S.; Maiti, J. (2007). "Risk-based maintenance—Techniques and applications". Journal of Hazardous Materials. 142 (3): 653–661. doi:10.1016/j.jhazmat.2006.06.069. PMID 16887261. ^ noun. 
The state of being protected against the criminal or unauthorized use of electronic data, or the measures taken to achieve this. 'But despite the number of agencies involved, cybersecurity generally seems to have slipped in importance for the administration ^ Commerce, Computer Security Division, Information Technology Laboratory, National Institute of Standards and Technology, U.S. Department of (15 November 2016). "NIST Releases SP 800-160, Systems Security Engineering". csrc.nist.gov. ^ Landsburg, Steven (3 March 2003). "Is your life worth $10 million?". Everyday Economics. Slate. Retrieved 17 March 2008. ^ Wired Magazine, Before the levees break, page 3. ^ Hartley, Catherine A.; Phelps, Elizabeth A. (2012). "Anxiety and Decision-Making". Biological Psychiatry. 72 (2): 113–118. doi:10.1016/j.biopsych.2011.12.027. PMC 3864559. PMID 22325982. ^ Jon Gertner. What Are We Afraid Of, Money 32.5 (2003): 80. ^ Lerner, Jennifer S.; Keltner, Dacher (2000). "Beyond Valence: Toward A Model of Emotion-Specific Influences on Judgment and Choice". Cognition & Emotion. 14 (4): 473–493. CiteSeerX 10.1.1.318.6023. doi:10.1080/026999300402763. ^ a b c d e Jon K. Maner, Norman B. Schmidt, The Role of Risk Avoidance in Anxiety, Behavior Therapy, Volume 37, Issue 2, June 2006, pp. 181–189, ISSN 0005-7894, 10.1016/j.beth.2005.11.003. ^ a b c d Constans, Joseph I. (2001). "Worry propensity and the perception of risk". Behaviour Research and Therapy. 39 (6): 721–729. doi:10.1016/S0005-7967(00)00037-1. ^ a b c Jon K. Maner, J. Anthony Richey, Kiara Cromer, Mike Mallott, Carl W. Lejuez, Thomas E. Joiner, Norman B. Schmidt, Dispositional anxiety and risk-avoidant decision-making, Personality and Individual Differences, Volume 42, Issue 4, March 2007, pp. 665–675, ISSN 0191-8869, 10.1016/j.paid.2006.08.016. ^ Joshua A. Hemmerich, Arthur S. Elstein, Margaret L. Schwarze, Elizabeth Ghini Moliski, William Dale, Risk as feelings in the effect of patient outcomes on physicians' future treatment decisions: A randomized trial and manipulation validation, Social Science & Medicine, Volume 75, Issue 2, July 2012, pp. 367–376, ISSN 0277-9536, 10.1016/j.socscimed.2012.03.020. ^ a b Slovic, P (1987). "Perception of risk". Science. 236 (4799): 280–285. Bibcode:1987Sci...236..280S. doi:10.1126/science.3563507. ^ Gigerenzer G (2004) Dread risk, 11 September, and fatal traffic accidents. Psych Sci 15:286−287. ^ Gaissmaier, W.; Gigerenzer, G. (2012). "9/11, Act II: A fine-grained analysis of regional variations in traffic fatalities in the aftermath of the terrorist attacks". Psychological Science. 23 (12): 1449–1454. doi:10.1177/0956797612447804. PMID 23160203. ^ a b Lichtenstein, S; Slovic, P; Fischhoff, B; Layman, M; Combs, B (1978). "Judged frequency of lethal events". Journal of Experimental Psychology: Human Learning and Memory. 4 (6): 551–578. doi:10.1037/0278-7393.4.6.551. hdl:1794/22549. ^ Öhman, A; Mineka, S (2001). "Fears, phobias, and preparedness: Toward an evolved module of fear and fear learning". Psychol Rev. 108 (3): 483–522. doi:10.1037/0033-295x.108.3.483. PMID 11488376. ^ Hill, KR; Walker, RS; Bozicevic, M; Eder, J; Headland, T; et al. (2011). "Co-residence patterns in hunter-gatherer societies show unique human social structure". Science. 331 (6022): 1286–1289. Bibcode:2011Sci...331.1286H. doi:10.1126/science.1199071. PMID 21393537. ^ Galesic, M; Garcia-Retamero, R (2012). "The risks we dread: A social circle account". PLoS ONE. 7 (4): e32837. Bibcode:2012PLoSO...732837G. doi:10.1371/journal.pone.0032837. 
PMC 3324481. PMID 22509250. ^ Bodemer, N.; Ruggeri, A.; Galesic, M. (2013). "When dread risks are more dreadful than continuous risks: Comparing cumulative population losses over time". PLOS ONE. 8 (6): e66544. Bibcode:2013PLoSO...866544B. doi:10.1371/journal.pone.0066544. PMC 3694073. PMID 23840503. ^ Wang, XT (1996). "Evolutionary hypotheses of risk-sensitive choice: Age differences and perspective change". Ethol Sociobiol. 17: 1–15. CiteSeerX 10.1.1.201.816. doi:10.1016/0162-3095(95)00103-4. ^ Frank Hyneman Knight "Risk, uncertainty and profit" pg. 19, Hart, Schaffner, and Marx Prize Essays, no. 31. Boston and New York: Houghton Mifflin. 1921. ^ Douglas Hubbard "How to Measure Anything: Finding the Value of Intangibles in Business" pg. 46, John Wiley & Sons, 2007. ^ a b Douglas Hubbard "The Failure of Risk Management: Why It's Broken and How to Fix It, John Wiley & Sons, 2009. Page 22 of https://canvas.uw.edu/courses/1066599/files/37549842/download?verifier=ar2VjVOxCU8sEQr23I5LEBpr89B6fnwmoJgBinqj&wrap=1 ^ Federal Reserve Bank of Chicago, Health and the Savings of Insured versus Uninsured, Working-Age Households in the U.S., November 2009 Referred literatureEdit James Franklin, 2001: The Science of Conjecture: Evidence and Probability Before Pascal, Baltimore: Johns Hopkins University Press. John Handmer and Paul James (2005). "Trust Us and Be Scared: The Changing Nature of Risk". Global Society. 21 (1): 119–30. CS1 maint: Uses authors parameter (link) Niklas Luhmann, 1996: Modern Society Shocked by its Risks (= University of Hong Kong, Department of Sociology Occasional Papers 17), Hong Kong, available via HKU Scholars HUB Historian David A. Moss' book When All Else Fails explains the US government's historical role as risk manager of last resort. Bernstein P. L. Against the Gods ISBN 0-471-29563-9. Risk explained and its appreciation by man traced from earliest times through all the major figures of their ages in mathematical circles. Rescher, Nicholas (1983). A Philosophical Introduction to the Theory of Risk Evaluation and Measurement. University Press of America. Porteous, Bruce T.; Pradip Tapadar (December 2005). Economic Capital and Financial Risk Management for Financial Services Firms and Conglomerates. Palgrave Macmillan. ISBN 978-1-4039-3608-0. Tom Kendrick (2003). Identifying and Managing Project Risk: Essential Tools for Failure-Proofing Your Project. AMACOM/American Management Association. ISBN 978-0-8144-0761-5. Hillson D. (2007). Practical Project Risk Management: The Atom Methodology. Management Concepts. ISBN 978-1-56726-202-5. Kim Heldman (2005). Project Manager's Spotlight on Risk Management. Jossey-Bass. ISBN 978-0-7821-4411-6. Dirk Proske (2008). Catalogue of risks – Natural, Technical, Social and Health Risks. Eos Transactions. 90. Springer. p. 18. Bibcode:2009EOSTr..90...18E. doi:10.1029/2009EO020009. ISBN 978-3-540-79554-4. Gardner D. Risk: The Science and Politics of Fear, Random House Inc. (2008) ISBN 0-7710-3299-4. Novak S.Y. Extreme value methods with applications to finance. London: CRC. (2011) ISBN 978-1-43983-574-6. Hopkin P. Fundamentals of Risk Management. 2nd Edition. Kogan-Page (2012) ISBN 978-0-7494-6539-1 Articles and papersEdit Cevolini, A (2015). ""Tempo e decisione. Perché Aristotele non-ha un concetto di rischio?" PDF". Divus Thomas. 118 (1): 221–249. Clark, L.; Manes, F.; Antoun, N.; Sahakian, B. J.; Robbins, T. W. (2003). "The contributions of lesion laterality and lesion volume to decision-making impairment following frontal lobe damage". 
Neuropsychologia. 41 (11): 1474–1483. doi:10.1016/s0028-3932(03)00081-2. Cokely, E. T.; Galesic, M.; Schulz, E.; Ghazal, S.; Garcia-Retamero, R. (2012). "Measuring risk literacy: The Berlin Numeracy Test" (PDF). Judgment and Decision Making. 7: 25–47. Drake, R. A. (1985). "Decision making and risk taking: Neurological manipulation with a proposed consistency mediation". Contemporary Social Psychology. 11: 149–152. Drake, R. A. (1985). "Lateral asymmetry of risky recommendations". Personality and Social Psychology Bulletin. 11 (4): 409–417. doi:10.1177/0146167285114007. Gregory, Kent J.; Bibbo, Giovanni; Pattison, John E. (2005). "A Standard Approach to Measurement Uncertainties for Scientists and Engineers in Medicine". Australasian Physical and Engineering Sciences in Medicine. 28 (2): 131–139. doi:10.1007/bf03178705. Hansson, Sven Ove. (2007). "Risk", The Stanford Encyclopedia of Philosophy (Summer 2007 Edition), Edward N. Zalta (ed.), forthcoming [1]. Holton, Glyn A. (2004). "Defining Risk", Financial Analysts Journal, 60 (6), 19–25. A paper exploring the foundations of risk. (PDF file). Knight, F. H. (1921) Risk, Uncertainty and Profit, Chicago: Houghton Mifflin Company. (Cited at: [2], § I.I.26.). Kruger, Daniel J., Wang, X.T., & Wilke, Andreas (2007) "Towards the development of an evolutionarily valid domain-specific risk-taking scale" Evolutionary Psychology (PDF file). Metzner-Szigeth, Andreas (2009). "Contradictory approaches? On realism and constructivism in the social sciences research on risk, technology and the environment". Futures. 41 (3): 156–170. doi:10.1016/j.futures.2008.09.017. Miller, L (1985). "Cognitive risk taking after frontal or temporal lobectomy I. The synthesis of fragmented visual information". Neuropsychologia. 23 (3): 359–369. doi:10.1016/0028-3932(85)90022-3. Miller, L.; Milner, B. (1985). "Cognitive risk taking after frontal or temporal lobectomy II. The synthesis of phonemic and semantic information". Neuropsychologia. 23 (3): 371–379. doi:10.1016/0028-3932(85)90023-5. Neill, M. Allen, J. Woodhead, N. Reid, S. Irwin, L. Sanderson, H. 2008 "A Positive Approach to Risk Requires Person Centred Thinking" London, CSIP Personalisation Network, Department of Health. Available from: https://web.archive.org/web/20090218231745/http://networks.csip.org.uk/Personalisation/Topics/Browse/Risk/ [Accessed 21 July 2008]. Wildavsky, Aaron; Wildavsky, Adam (2008). "Risk and Safety". In David R. Henderson (ed.). Concise Encyclopedia of Economics (2nd ed.). Indianapolis: Library of Economics and Liberty. ISBN 978-0865976658. OCLC 237794267. riskat Wikipedia's sister projects Risk – The entry of the Stanford Encyclopedia of Philosophy Retrieved from "https://en.wikipedia.org/w/index.php?title=Risk&oldid=905057181"
Based on black hole thermodynamics, shouldn't empty space contain infinite energy?

According to "Hawking radiation", Wikipedia [links omitted]: In SI units, the radiation from a Schwarzschild black hole is blackbody radiation with temperature $$T={\frac {\hbar c^{3}}{8\pi GMk_{\text{B}}}}\quad \left(\approx {\frac {1.227\times 10^{23}\;{\text{kg}}}{M}}\;{\text{K}}=6.169\times 10^{-8}\;{\text{K}}\times {\frac {M_{\odot }}{M}}\right),$$ where $\hbar$ is the reduced Planck constant, $c$ is the speed of light, $k_{\text{B}}$ is the Boltzmann constant, $G$ is the gravitational constant, $M_{\odot}$ is the solar mass, and $M$ is the mass of the black hole.

By taking the limit of $T$ as $M$ goes to zero, the following is found: $$\lim_{M\to 0^+} T=\lim_{M\to 0^+}\frac{\hbar c^3}{8\pi G k_{\text{B}}M}=+\infty$$ Wouldn't this mean that empty space would have infinite energy? When $M=0$ the Schwarzschild radius is also $0$, so every point in space would be paradoxically hot. I know I'm probably wrong, I just don't know why I'm wrong.

Tags: thermodynamics, black-holes, hawking-radiation

If $M$ is zero, what is supposed to be radiating (nothing?) and where does the energy for this radiation come from (nowhere?)? – StephenG Oct 1 '18 at 21:31

From your link: "Unlike most objects, a black hole's temperature increases as it radiates away mass. The rate of temperature increase is exponential, with the most likely endpoint being the dissolution of the black hole in a violent burst of gamma rays. A complete description of this dissolution requires a model of quantum gravity, however, as it occurs when the black hole approaches Planck mass and Planck radius." – Bruce Greetham Oct 1 '18 at 21:46

Hawking's original computation involves quantum fields in curved spacetime, where curvatures are small compared to the Planck length. I believe that in order to take a limit all the way to $M=0$, one must know how Planck scale quantum gravity works. Maybe there's something Hawking says in his article "Black hole explosions?" but I can't open it at the moment: nature.com/articles/248030a0 – Avantgarde Oct 1 '18 at 22:16

Why should $T=\infty,\,M=0$ imply infinite energy? It implies zero lifetime, to get rid of the energy $Mc^2=0$ by radiation. – J.G. Oct 2 '18 at 9:46

Why don't we just mention that points in empty space just aren't black holes, so you just can't apply black hole formulas? Is this thought completely erroneous? – Guiroux Oct 2 '18 at 9:57

Wouldn't this mean that empty space would have infinite energy?

Ignoring quantum issues (and in the absence of a complete theory of quantum gravity we have no choice) and staying strictly with the classical approach, let's consider the problem. Regardless of what amount of power is radiated by the black hole, that power is removed from the energy of the black hole. But the black holes you are talking about have zero energy, and so there is no way for them to power Hawking radiation. The mistake you are making is ignoring that nature balances its books, and in this case the balance is that you can't reduce the mass below zero. There's another reason why your logic is failing. The entire idea of Hawking radiation depends on the existence of a curved spacetime and an event horizon. But when $M=0$ we just get a flat spacetime. There is no event horizon.
And note that an $R=0$ event horizon would mean there was nothing inside the black hole - no volume, nothing. The formula you are using for temperature is based on a model which starts out with a non-zero positive mass and then makes a first-order approximation close to the event horizon (Wikipedia has a description of this). But that formula does not apply when you're taking $M\to 0$. Again, you're using an approximation based on the assumption of a curved spacetime ($M>0$) and applying it outside its "designed purpose" as an approximation. – StephenG

Cute question. I don't think it's possible to give a definite answer unless we decide on a definite description of the manner in which the limit is to be taken. If we're just considering the limit of an asymptotically flat spacetime consisting of a single black hole, as the black hole's mass goes to zero, then I think the answer is pretty straightforward. The radiated power is $P\propto M^{-2}\propto r_s^{-2}$, where $r_s$ is the Schwarzschild radius. If this power is considered to be emitted from the surface of a sphere with radius equal to $r_s$, then the power per unit area at $r>r_s$ is $I\propto Pr^{-2} \propto r_s^{-2} r^{-2}$. The average distance of a randomly chosen observer from the black hole is infinite, so we should consider the limit $r\rightarrow\infty$. So now the question arises as to whether we should compute $$\lim_{r\rightarrow\infty}\lim_{r_s\rightarrow 0} I$$ or $$\lim_{r_s\rightarrow 0}\lim_{r\rightarrow\infty} I.$$ These are both indeterminate forms, so it sort of doesn't matter which one we evaluate -- neither one gives a meaningful answer without some physics input. I think the additional physics input comes from the fact that $r_s$ probably can't be any smaller than the Planck length $\ell$. Therefore we really shouldn't be taking the limit as $r_s\rightarrow 0$ but as $r_s\rightarrow\ell$. Then the double limit is zero. Another way of approaching the whole thing is to imagine a theory of quantum gravity in which the vacuum has virtual black holes popping in and out of existence. Then presumably one of the things you want from such a theory (which we don't yet possess) is that it doesn't predict an infinite density of photons for the vacuum. I would imagine that this would happen because the density of virtual Planck-scale black holes would be small.

"When the mass drops down to 228 metric tonnes, that's the signal that exactly one second remains. The event horizon size at the time will be 340 yoctometers, or 3.4 × 10^-22 meters: the size of one wavelength of a photon with an energy greater than any particle the LHC has ever produced. But in that final second, a total of 2.05 × 10^22 Joules of energy, the equivalent of five million megatons of TNT, will be released. It's as though a million nuclear fusion bombs went off all at once in a tiny region of space; that's the final stage of black hole evaporation." https://www.forbes.com/sites/startswithabang/2017/05/20/ask-ethan-what-happens-when-a-black-holes-singularity-evaporates/#3d39e14b7c8c

The point is, in reality, M never hits zero. Theory is great but we can't take it literally. – CramerTV

This doesn't really make sense. The quoted paragraph doesn't say that $M$ never reaches zero, and in fact $M$ does reach zero after a finite time. (This is guaranteed, because the rate of mass loss is increasing with time.) – Ben Crowell Oct 2 '18 at 0:32

@BenCrowell, Perhaps I'm reading it wrong then.
The way I understood it is that the mass of the black hole never gets to zero. The singularity explodes before its mass reaches zero, converting its remaining mass into pure energy, losing its event horizon, and thus it is no longer a black hole. Since it isn't a black hole when $M=0$, using a black hole equation at that limit doesn't make sense. Thus empty space does not, nor should it, conform to a black hole equation. – CramerTV Oct 2 '18 at 1:21
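As a quick numerical cross-check of the temperature formula quoted in the question above, here is a small Python sketch (the constants are standard SI values rounded to a few digits; nothing in it addresses the quantum-gravity caveats raised in the answers):

import math

# Hawking temperature T = hbar c^3 / (8 pi G M k_B) for a Schwarzschild black hole.
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K
M_sun = 1.989e30         # solar mass, kg

def hawking_temperature(M):
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(hawking_temperature(M_sun))   # ~6.17e-8 K, close to the quoted 6.169e-8 K
print(hawking_temperature(1.0))     # ~1.227e23 K, matching the quoted prefactor
print(hawking_temperature(1e-12))   # grows without bound as M approaches zero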
math problem

How many ordered pairs (x,y) of positive integers satisfy the inequality 4x + 5y < 200?

What you are given is $4x + 5y < 200$. Now, how many pairs of coordinates satisfy that and are also in $\mathbb{Z}$? Graph $4x + 5y = 200$ -- you will see a line -- the coordinates we won't need are the ones for which the sum gets larger than 200, so we just expunge the top part, where the number gets higher, right? (NOTE: 0 is not a positive number.) Check the image, and do not forget to include the coordinates that lie on the purple line as well. At this point just count them all if you want an answer -- unless someone manages to find an easier way, LOL. Also, do not count the ones on the line itself, as those would make it $=200$. :) – UsernameTooShort, Jun 28, 2021

As UTS said, we can solve this geometrically using something known as Pick's Theorem.

If we graph 4x + 5y = 200, this will form a first-quadrant triangle with a base of 50 and a height of 40.

The area = (1/2)(50)(40) = 1000.

Pick's Theorem says that area = (number of lattice points in the interior of the triangle) + (number of lattice points on the boundary of the triangle)/2 - 1, where a lattice point is a point with integer coordinates.

The number of lattice points on the boundary of this triangle = 41 + 50 + 9 = 100.

1000 = number of lattice points in the interior + 100/2 - 1
1000 = number of lattice points in the interior + 49
1000 - 49 = number of lattice points in the interior
951 = number of lattice points in the interior = number of ordered pairs of positive integers satisfying 4x + 5y < 200

See the graph here: https://www.desmos.com/calculator/tkypy0yfbs – CPhill, Jun 28, 2021
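A brute-force count agrees with the 951 above; here is a tiny Python sketch written only as a verification (it is not part of either answer):

# Count ordered pairs (x, y) of positive integers with 4x + 5y < 200.
count = sum(
    1
    for x in range(1, 50)   # 4x must stay below 200, so x can be at most 49
    for y in range(1, 40)   # 5y must stay below 200, so y can be at most 39
    if 4 * x + 5 * y < 200
)
print(count)  # 951, matching the Pick's theorem argument above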
Weekly Papers on Quantum Foundations (17)

Published by editor on April 24, 2021

Revisiting the compatibility problem between the gauge principle and the observability of the canonical orbital angular momentum in the Landau problem. (arXiv:2104.10885v1 [quant-ph])
9:11 AM | M. Wakamatsu, Y. Kitadono, L.-P. Zou, P.-M. Zhang | quant-ph updates on arXiv.org

As is widely known, the eigen-functions of the Landau problem in the symmetric gauge are specified by two quantum numbers. The first is the familiar Landau quantum number $n$, whereas the second is the magnetic quantum number $m$, which is the eigen-value of the canonical orbital angular momentum (OAM) operator of the electron. The eigen-energies of the system depend only on the first quantum number $n$, and the second quantum number $m$ does not correspond to any direct observables. This seems natural since the canonical OAM is generally believed to be a gauge-variant quantity, and observation of a gauge-variant quantity would contradict a fundamental principle of physics called the gauge principle. In recent research, however, Bliokh et al. analyzed the motion of a helical electron beam along the direction of a uniform magnetic field, which was mostly neglected in past analyses of the Landau states. Their analyses revealed highly non-trivial $m$-dependent rotational dynamics of the Landau electron, but the problem is that their papers give the impression that the quantum number $m$ in the Landau eigen-states corresponds to a genuine observable. This compatibility problem between the gauge principle and the observability of the quantum number $m$ in the Landau eigen-states was attacked in our previous letter paper. In the present paper, we try to give a more convincing answer to this delicate problem of physics, especially by paying attention not only to the particle-like aspect but also to the wave-like aspect of the Landau electron.

A note on "Algebraic approach to Casimir force between two $\delta$-like potentials" (K. Ziemian, Ann. Henri Poincaré, Online First, 2021). (arXiv:2104.11029v1 [quant-ph])
9:11 AM | Davide Fermi, Livio Pizzocchero (Università di Milano) | quant-ph updates on arXiv.org

We comment on the recent work [1], and on its relations with our papers [2,3] cited therein. In particular we show that, contrary to what is stated in [1], the Casimir energy density determined therein in the case of a single delta-like singularity coincides with the energy density obtained previously in our paper [2] using a different approach.

Conflicts Between Science and Religion: Epistemology to the Rescue. (arXiv:2104.10776v1 [physics.hist-ph])
9:11 AM | physics.hist-ph updates on arXiv.org
Authors: Moorad Alexanian

Both Albert Einstein and Erwin Schrödinger have defined what science is. Einstein includes not only physics, but also all natural sciences dealing with both organic and inorganic processes in his definition of science. According to Schrödinger, the present scientific worldview is based on the two basic attitudes of comprehensibility and objectivation. On the other hand, the notion of religion is quite equivocal and, unless clearly defined, will easily lead to all sorts of misunderstandings. Does science, as defined, encompass the whole of reality? More importantly, what is the whole of reality and how do we obtain data for it?
The Christian worldview considers a human as body, mind, and spirit (soul), which is consistent with the Cartesian ontology of only three elements: matter, mind, and God. Therefore, is it possible to give a precise definition of science showing that the conflicts are actually apparent and not real?

The Mereology of Thermodynamic Equilibrium. (arXiv:2104.11140v1 [physics.hist-ph])
Authors: Michael te Vrugt

The special composition question (SCQ), which asks under which conditions objects compose a further object, establishes a central debate in modern metaphysics. Recent successes of inductive metaphysics, which studies the implications of the natural sciences for metaphysical problems, suggest that insights into the SCQ can be gained by investigating the physics of composite systems. In this work, I show that the minus first law of thermodynamics, which is concerned with the approach to equilibrium, leads to a new approach to the SCQ, the thermodynamic composition principle (TCP): multiple systems in (generalized) thermal contact compose a single system. This principle, which is justified based on a systematic classification of possible mereological models for thermodynamic systems, can form the basis of an inductive argument for universalism. A formal analysis of the TCP is provided on the basis of mereotopology, which is a combination of mereology and topology. Here, "thermal contact" can be analyzed using the mereotopological predicate "self-connectedness". Self-connectedness has to be defined in terms of mereological sums to ensure that scattered objects cannot be self-connected.

Gravitational Footprints of Black Holes and Their Microstate Geometries. (arXiv:2104.10686v1 [hep-th])
9:10 AM | gr-qc updates on arXiv.org
Authors: Ibrahima Bah, Iosif Bena, Pierre Heidmann, Yixuan Li, Daniel R. Mayerson

We construct a family of non-supersymmetric extremal black holes and their horizonless microstate geometries in four dimensions. The black holes can have finite angular momentum and an arbitrary charge-to-mass ratio, unlike their supersymmetric cousins. These features make them and their microstate geometries astrophysically relevant. Thus, they provide interesting prototypes to study deviations from Kerr solutions caused by new horizon-scale physics. In this paper, we compute the gravitational multipole structure of these solutions and compare them to Kerr black holes. The multipoles of the black hole differ significantly from Kerr as they depend non-trivially on the charge-to-mass ratio. The horizonless microstate geometries have the same multipoles as their corresponding black hole, with small deviations set by the scale of their microstructure.

Beyond the Equivalence Principle: Gravitational Magnetic Monopoles. (arXiv:2104.11063v1 [gr-qc])
Authors: Mario Novello, Angelo E. S. Hartmann

We review the hypothesis of the existence of gravitational magnetic monopoles (H-poles for short), defined in analogy with Dirac's hypothesis of magnetic monopoles in electrodynamics. These hypothetical dual particles violate the equivalence principle and are accelerated by a gravitational field. We propose an expression for the gravitational force exerted upon an H-pole. According to GR, ordinary matter (which we call E-poles) follows geodesics in a background metric. The dual H-poles follow geodesics in an effective metric.

New Results on Vacuum Fluctuations: Accelerated Detector versus Inertial Detector in a Quantum Field.
(arXiv:2104.04142v2 [quant-ph] CROSS LISTED) Authors: I-Chin Wang We investigate the interaction between a moving detector and a quantum field, especially how the trajectory of the detector would affect the vacuum fluctuations when the detector is moving in a quantum field (Unruh effect). We focus on a two-moving-detector system for future application in quantum teleportation. We find that the trajectory of a uniformly accelerated detector in Rindler space cannot be extended to the trajectory of a detector moving at constant velocity. Based on the past work, we redo the calculations and find that a term is missing in the past calculations; we also find that there are some restrictions on the values of the parameters in the solutions. Besides, without including the missing term, the variance from the quantum field for the inertial detector will be zero, which is unlikely for such a system. Combining all these points, there is a difference in the two-point correlation function between the inertial detector and the accelerated detector in the early-time region. The influence from proper acceleration can be seen in the two-point correlation functions. This might play a role in the quantum teleportation process and is worth studying thoroughly. Observing a superposition Friday, April 23, 2021, 8:00 AM | Latest Results for Synthese The bare theory is a no-collapse version of quantum mechanics which predicts certain puzzling results for the introspective beliefs of human observers of superpositions. The bare theory can be interpreted to claim that an observer can form false beliefs about the outcome of an experiment which produces a superpositional result. It is argued that, when careful consideration is given to the observer's belief states and their evolution, the observer does not end up with the beliefs claimed. This result leads to questions about whether there can be any allure for no-collapse theories as austere as the bare theory. Simulating the Same Physics with Two Distinct Hamiltonians Thursday, April 22, 2021, 6:00 PM | Karol Gietka, Ayaka Usui, Jianqiao Deng, and Thomas Busch | PRL: General Physics: Statistical and Quantum Mechanics, Quantum Information, etc. Author(s): Karol Gietka, Ayaka Usui, Jianqiao Deng, and Thomas Busch A new framework allows one to use a quantum simulation of one Hamiltonian to study another. [Phys. Rev. Lett. 126, 160402] Published Thu Apr 22, 2021 Non-equilibrium Thermodynamics and the Free Energy Principle in Biology Thursday, April 22, 2021, 3:53 PM | Philsci-Archive Palacios, Patricia and Colombo, Matteo (2021) Non-equilibrium Thermodynamics and the Free Energy Principle in Biology. [Preprint] Russell's The Analysis of Matter as the First Book on Quantum Gravity Wednesday, April 21, 2021, 3:24 PM | Philsci-Archive Mikki, Said (2021) Russell's The Analysis of Matter as the First Book on Quantum Gravity. [Preprint] The Democratization of Science Kurtulmus, Faik (2021) The Democratization of Science. [Preprint] Perspectival QM and Presentism: a New Paradigm Merriam, Paul (2021) Perspectival QM and Presentism: a New Paradigm. [Preprint] Exact Thermalization Dynamics in the "Rule 54" Quantum Cellular Automaton Monday, April 19, 2021, 6:00 PM | Katja Klobas, Bruno Bertini, and Lorenzo Piroli | PRL: General Physics: Statistical and Quantum Mechanics, Quantum Information, etc.
Author(s): Katja Klobas, Bruno Bertini, and Lorenzo Piroli New studies provide analytical descriptions and exact solutions for various aspects of thermodynamics in quantum many-body systems. [Phys. Rev. Lett. 126, 160602] Published Mon Apr 19, 2021 Philosophy of Science in China: Politicized, De-politicized, and Re-politicized Monday, April 19, 2021, 3:06 PM | Philsci-Archive Guo, Yuanlin and Ludwig, David (2021) Philosophy of Science in China: Politicized, De-politicized, and Re-politicized. [Preprint] An Infinite Lottery Paradox Norton, John D. and Parker, Matthew W. (2021) An Infinite Lottery Paradox. [Preprint] A new paradox and the reconciliation of Lorentz and Galilean transformations Monday, April 19, 2021, 8:00 AM | Latest Results for Synthese One of the most debated problems in the foundations of the special relativity theory is the role of conventionality. A common belief is that the Lorentz transformation is correct but the Galilean transformation is wrong (only approximately correct in the low-speed limit). It is another common belief that the Galilean transformation is incompatible with Maxwell equations. However, the "principle of general covariance" in general relativity makes any spacetime coordinate transformation equally valid. This includes the Galilean transformation as well. This presents a new paradox. This new paradox is resolved with the argument that the Galilean transformation is equivalent to the Lorentz transformation. The resolution of this new paradox also provides the most straightforward resolution of an older paradox due to Selleri (Found Phys Lett 10:73–83, 1997). I also present a consistent electrodynamics formulation, including Maxwell equations and electromagnetic wave equations, under the Galilean transformation, in the exact form for any high speed rather than in the low-speed approximation. Electrodynamics in rotating reference frames is rarely addressed in textbooks. The presented formulation of electrodynamics under the Galilean transformation even works well in rotating frames if we replace the constant velocity \(\mathbf{v}\) with \(\mathbf{v}=\boldsymbol{\omega}\times\mathbf{r}\). This provides a practical tool for applications of electrodynamics in rotating frames. When electrodynamics is concerned, between two inertial reference frames, both Galilean and Lorentz transformations are equally valid, but the Lorentz transformation is more convenient. In rotating frames, although the Galilean electrodynamics does not seem convenient, it could be the most convenient formulation compared with other transformations, due to the intrinsic complex nature of the problem. Fighting about frequency | Latest Results for Synthese Scientific disputes about how often different processes or patterns occur are relative frequency controversies. These controversies occur across the sciences. In some areas—especially biology—they are even the dominant mode of dispute. Yet they depart from the standard picture of what a scientific controversy is like. In fact, standard philosophical accounts of scientific controversies suggest that relative frequency controversies are irrational or lacking in epistemic value. This is because standard philosophical accounts of scientific controversies often assume that in order to be rational, a scientific controversy must (a) reach a resolution and (b) be about a scientifically interesting question.
Relative frequency controversies rarely reach a resolution, however, and some scientists and philosophers are skeptical that these controversies center on scientifically interesting questions. In this paper, I provide a novel account of the epistemic contribution that relative frequency controversies make to science. I show that these controversies are rational in the sense of furthering the epistemic aims of the scientific communities in which they occur. They do this despite rarely reaching a resolution, and independent of whether the controversies are about scientifically interesting questions. This means that assumptions (a) and (b) about what is required for a controversy to be rational are wrong. Controversies do not need to reach a resolution in order to be rational. And they do not need to be about anything scientifically interesting in order to make valuable epistemic contributions to science. Barad, Bohr, and quantum mechanics Sunday, April 18, 2021, 8:00 AM | Latest Results for Synthese The last decade has seen an increasing number of references to quantum mechanics in the humanities and social sciences. This development has in particular been driven by Karen Barad's agential realism: a theoretical framework that, based on Niels Bohr's interpretation of quantum mechanics, aims to inform social theorizing. In dealing with notions such as agency, power, and embodiment as well as the relation between the material and the discursive level, the influence of agential realism in fields such as feminist science studies and posthumanism has been profound. However, no one has hitherto paused to assess agential realism's proclaimed quantum mechanical origin, including its relation to the writings of Niels Bohr. This is the task taken up here. We find that many of the implications that agential realism allegedly derives from a Bohrian interpretation of quantum mechanics dissent from Bohr's own views and are in conflict with those of other interpretations of quantum mechanics. Agential realism is at best consistent with quantum mechanics and, consequently, it does not capture what quantum mechanics in any strict sense implies for social science or any other domain of inquiry. Agential realism may be interesting and thought-provoking from the perspective of social theorizing, but it is neither sanctioned by quantum mechanics nor by Bohr's authority. This conclusion not only holds for agential realism in particular, it also serves as a general warning against other attempts to use quantum mechanics in social theorizing. The landscape and the multiverse: What's the problem? The popularity of string theory, a candidate theory of quantum gravity, has waxed and waned over the past four decades. One current source of scepticism is that the theory can be used to derive, depending upon the input geometrical assumptions that one makes, a vast range of different quantum field theories, giving rise to the so-called landscape problem. One apparent way to address the landscape problem is to posit the existence of a multiverse; this, however, has in turn drawn heightened attention to questions regarding the empirical testability and predictivity of string theory. We argue first that the landscape problem relies on dubious assumptions and does not motivate a multiverse hypothesis. Nevertheless, we then show that the multiverse hypothesis is scientifically legitimate and could be coupled to string theory for other empirical reasons.
Looking at various cosmological approaches, we offer an empirical criterion to assess the scientific status of multiverse hypotheses. The First Droplet in a Cloud Chamber Track Jonathan F. Schonfeld | Foundations of Physics, volume 51, Article number: 47 (2021) In a cloud chamber, the quantum measurement problem amounts to explaining the first droplet in a charged-particle track; subsequent droplets are explained by Mott's 1929 wave-theoretic argument about collision-induced wavefunction collimation. I formulate a mechanism for how the first droplet in a cloud chamber track arises, making no reference to quantum measurement axioms. I look specifically at tracks of charged particles emitted in the simplest slow decays, because I can reason about rather than guess the form that wave packets take. The first visible droplet occurs when a randomly occurring, barely-subcritical vapor droplet is pushed past criticality by ionization triggered by the faint wavefunction of the emitted charged particle. This is possible because the potential energy incurred when an ionized vapor molecule polarizes the other molecules in a droplet can balance the excitation energy needed for the emitted charged particle to create the ion in the first place. This degeneracy is a singular condition for Coulombic scattering, leading to infinite or near-infinite ionization cross sections, and from there to an emergent Born rule in position space, but not an operator projection as in the projection postulate. Analogous mechanisms may explain canonical quantum measurement behavior in detectors such as ionization chambers, proportional counters, photomultiplier tubes or bubble chambers. This work is important because attempts to understand canonical quantum measurement behavior and its limitations have become urgent in view of worldwide investment in quantum computing and in searches for super-rare processes (e.g., proton decay).
Using Data To Predict Data Science in 2015 This is the time of the year when pundits make their 2015 predictions. But to make predictions about Data Science, shouldn't one use data? Here are four charts from Google Trends that show the trending performance of various data science technologies. Apache Spark really is overtaking Apache Hadoop. In this R vs. IPython Notebook chart, we should just compare the trends rather than the absolute magnitudes. "R" is notoriously difficult to Google for, and "R Cran" is just one of the many tricks R users employ to Google for information about R. And, sadly, Google Trends has no way to additively combine search trends together (e.g. "R Cran" OR "R Project"). But, we can still see that IPython Notebook is skyrocketing upward while R is sagging. This is a little hard to read and requires some explaining. The former name for "Apache Storm" was "Twitter Storm" when Twitter first open-sourced Storm onto GitHub in 2011. But "Twitter Storm" has another common usage, which is a "storm of tweets," such as about a celebrity. I'm guessing about half the searches for "Twitter Storm" are for this latter usage. The takeaway is that Storm got a two-year head start on Spark Streaming and has been chugging away ever since. Part of the reason is that Spark Streaming, despite the surge in popularity of base Spark, had a lot of catching up to do with Storm in terms of graceful handling of errors and graceful shutdown/restart. A lot of that is addressed in the new HA Spark Streaming features introduced in Spark 1.2.0, released a week ago. But the other interesting trend is that the academic term "complex event processing" is falling away in favor of the more industry-oriented terms "Storm" and "Spark Streaming". People forget that "Machine Learning" was quite popular back in the dot-com era. And then it started to fade. That is, until Geoffrey Hinton's invention of deep learning in 2006. That seems to have lifted the popularity of machine learning in general. Well, at least we can say there's a correlation. The other interesting thing is the very recent (within the past month) uptick in interest in DeepMind. Of course there was a barrage of interest in October when the over-hyped headlines blared "mimics human". But I think people only this past month started getting past the hype and started looking at the actual DeepMind paper, which is interesting because it shows how they added state to a neural network, and that that is how they achieved "short term memory". Posted by Michael Malak at 10:50 PM No comments: Labels: BigData, DataScience, iPython, MachineLearning, Spark, Streaming Neuromorphic vs. Neural Net The diagram of biological brain waves comes from med.utah.edu and the diagram of an artificial neural network neuron comes from hemming.se
Brain | Artificial Neural Network
Asynchronous | Global synchronous clock
Stochastic | Deterministic
Shaped waves | Scalar values
Storage and compute synonymous | Storage and compute separate
Training is a mystery | Backpropagation
Adaptive network topology | Fixed network
Cycles in topology | Cycle-free topology
The table above lists the differences between a regular artificial neural network (feed-forward non-spiking, to be specific) and a biological brain.
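To make the right-hand column of the table concrete, here is a minimal sketch (my illustration, not from the original post) of what a single feed-forward, non-spiking artificial neuron computes: a deterministic weighted sum of scalar inputs passed through an activation function, evaluated once per synchronous step.

import math

def artificial_neuron(inputs, weights, bias):
    # One deterministic, non-spiking neuron: scalar inputs in, one scalar out.
    # No wave shapes and no asynchronous timing; identical inputs always yield identical output.
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-weighted_sum))  # sigmoid activation

# The output is a single scalar per synchronous evaluation, in contrast to the
# shaped, asynchronous waves of a biological neuron.
print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))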
An artificial neural network (ANN) is so far in architecture and function from a biological brain that attempts to simulate a brain in silicon go by a different term altogether: neuromorphic. In the table above, if the last row is modified to allow a neural network to have cycles in its network topology, then it becomes known as a recurrent neural network -- still not quite neuromorphic. But by also modifying the first row of the table to remove the global synchronous clock from neural networks, IBM's TrueNorth chip, announced August 2014, claims the neuromorphic moniker. (Asynchronous neural networks are also called spiking neural networks (SNN), but TrueNorth combines the properties of both RNNs and SNNs.) The TrueNorth chip sports one million neurons and 256 million synapses. But you can't buy one. The closest you can come today perhaps is to use an FPAA, a field-programmable analog array, the analog version of an FPGA. But FPAAs haven't scaled nearly as highly as FPGAs. The largest FPAA is the RASP 2.9. The image of its die below comes from the thesis Contributions to Neuromorphic and Reconfigurable Circuits and Systems. It has only 78 CABs (Computational Analog Blocks), in contrast to the largest FPGAs, which have over one million logic elements. Researchers in 2013 were able to simulate 18 neuromorphic neurons with this RASP 2.9 analog FPAA chip. The human brain has 100 billion neurons, so it would hypothetically take 100,000 TrueNorth chips to approach equivalence, based on number of neurons alone. Of course, the other factors, in particular the variable wave shape of biological neurons, would likely put any TrueNorth simulation of a brain at a great disadvantage. A lot more information can be carried in a wave shape than in a single scalar value. In the diagram at the top of this blog post, the different wave shapes resulted from showing an animal light spots of different diameters. An artificial neural network, in contrast, would require N output neurons to represent N distinct diameters. But with an analog FPAA, perhaps neurons that support wave shapes could be simulated, even if for now one may be limited to a dozen or so neurons. But then there is the real mystery: how a biological brain learns, and by extension how to train a neuromorphic system. Posted by Michael Malak at 10:22 AM No comments: Labels: Artificial Intelligence Single GPU-Powered Node 4x Faster Than 50-node Spark Cluster The above chart comes from a new dissertation out of Berkeley entitled High Performance Machine Learning through Codesign and Rooflining. Huasha Zhao and John F. Canny demonstrate that for the PageRank problem, their custom GPU-optimized matrix library, which they call BIDMat, outperforms a 50-node Spark cluster by a factor of four. Their single GPU-powered node had two dual-GPU Nvidia cards for a total of four GPUs. And BIDMat is just one component of their full BIDMach software stack illustrated below (illustration also from their dissertation). Intel MKL and GPU/Cuda are of course off-the-shelf libraries. Butterfly mixing is a new 2013 technique by the same two authors that updates a machine learning model "incrementally" by using small subsets of training data and propagating model changes to neighboring subsets. They do not state it explicitly, but these network communication diagrams between the small subsets resemble the butterfly steps in the Fast Fourier Transform algorithm.
Kylix is an even newer (2014) algorithm, again by the same two authors, that further optimizes the butterfly approach by varying the degree of each butterfly node (the number of butterfly nodes each butterfly node must communicate with) in a way that is optimized for real-life power-law data distributions. Finally, part of their overall approach is what they have coined "rooflining", which is where they compute the theoretical maximum communication and computation bandwidth, say of a GPU, and ensure that their measured performance comes close to it. In their dissertation, they show they reach 80-90% of CPU/GPU theoretical maximums. By doing so, the authors have turned GPU hype into reality, and have implemented numerous machine learning algorithms using their BIDMach framework. Now it remains to either make BIDMach available for commercial production use, or to incorporate the concepts into an existing cluster framework like Spark. Posted by Michael Malak at 4:32 PM 3 comments: Labels: BIDMach, GPU, Spark Parallel vs. Distributed file systems: Time for RAID on Hadoop? The long-standing wisdom is that RAID is not beneficial for Hadoop data nodes. This wisdom is traced back to the venerable Hadoop: The Definitive Guide, which cites a 2009 Apache forum posting from Yahoo! engineer Runping Qi reporting experimental results showing JBOD to be faster than RAID-0. The reasons cited in the Hadoop book are: (1) HDFS has redundancy anyway, and (2) RAID-0 slows down the entire array to match the speed of the slowest drive in the array. While the 2009 experimental results are compelling (at least for 2009), these two stated reasons are not. We can look toward "parallel" file systems from the world of High Performance Computing (HPC) for inspiration. The paradigm in HPC is to separate compute from storage, but to have a really fast network, and more importantly to have a "parallel file system". A parallel file system aggregates the bandwidth from many storage nodes to feed a compute node. While Hadoop was able to achieve its performance through its clever insight of shipping code to data, each CPU in a Hadoop cluster has to suck its data from a single disk through a straw. The limiting factor for both HPC and Hadoop is the slow transfer rate (1 Gbps) out of a hard drive. HPC addresses this bottleneck by: (1) striping data across nodes, storing data across nodes in a round-robin fashion rather than the more random approach that Hadoop takes; (2) using high-bandwidth links in the cluster (e.g. 40 Gbps Infiniband vs. 1 Gbps or 10 Gbps Ethernet); and (3) using network DMA (Infiniband) instead of a heavy software stack (Ethernet). In particular, a 2011 comparison between Lustre and HDFS cited lack of striping in HDFS as a reason for reduced HDFS performance. There have been a couple of chinks in the armor of the "No RAID for HDFS" received wisdom in the past couple of years. The book Pro Apache Hadoop, Second Edition, just published this month, provides one specific exception to the rule: some Hadoop systems can drop the replication factor to 2. One example is Hadoop running on the EMC Isilon hardware. The underlying rationale is that the hardware uses RAID 5, which provides built-in redundancy, enabling a drop in replication factor. Dropping the replication factor has obvious benefits because it enables faster I/O performance (writing one replica fewer). Another is Hortonworks in 2012, which gives credence to the idea of using RAID-0, but at most only pairs of disks at a time.
It seems that we could have the best of both worlds if each node in a Hadoop cluster had parallel I/O across many disks, such as can be provided by RAID-0. As for the concern that RAID-0 is slowed to the speed of the slowest drive, well, the same is true of PVFS. So should RAID-0 be used in Hadoop data nodes to speed up I/O to the CPU? Probably not, and here's why. CPUs for the past decade have plateaued on clock speed and have instead been adding cores. And there is a recommendation that there be a 1:1 ratio of spindles to cores. For the purposes of I/O, multiple hard drives joined in RAID-0 would be considered a single spindle. So one could imagine a single 12-core CPU connected to 12 RAID-0 pairs, for a total of 24 drives. But as core count goes up over the upcoming years, and if dual- and quad-CPU motherboards are considered instead, this scenario becomes the exception. Posted by Michael Malak at 4:27 PM No comments: Labels: BigData Data Locality: HPC vs. Hadoop vs. Spark Diagram Notes: 1. Yellow documents are map outputs 2. Not shown is that Hadoop spools map outputs to disk before the reduce task reads them, whereas Spark keeps the map outputs in RDDs. The big advance Hadoop brought over classic High Performance Computing (HPC) is data locality. Hadoop brings the compute to the data. (HPC compensates by having faster interconnects such as Infiniband and high-bandwidth storage.) The big advance Spark brought over Hadoop is storing data in each node's RAM instead of each node's disk. Spark's leveraging of data locality is very similar to Hadoop's: namely, computation is assigned to occur where the data resides. Except Spark 1.2 is set to improve that a bit. In a just-published paper, AMP Lab contributor Shivaram Venkataraman et al. propose assigning the reduce task to the node that happens to have the largest map output, thus minimizing data movement. This advance is currently slated for Spark 1.2, in Jira ticket SPARK-2774. There are other advances described in the Venkataraman et al. paper, namely, when sampling subsets of data such as BlinkDB does, Spark could greedily take whatever data happens to be present on nodes with available compute, and call that the sample. There is no set Spark release for that feature, which the paper calls KMN. Labels: BigData, HPC, Spark Four Reasons for Immutable HDFS Archive Two years ago, when I first joined Michael Walker's Data Science & Business Analytics Meetup, the form asked (and still asks) "What important truth do very few people agree with you on?" My answer was "Data should never be deleted". At the time, I had no idea what Data Science was and had barely been introduced to Big Data, but it was a dictum I lived by, much to the consternation of my bosses over the past two decades when it came time to approve purchases of hard drives. Well, I may have to update my profile, because it seems more and more people are agreeing with me. As I blogged on the January 2014 Boulder/Denver Big Data Meetup, the discussion came to a consensus that all ingested data should be kept intact as-is as an immutable data store, and that processed data should be stored in some kind of data warehouse for the actual analytics. I wrote then that it was good to have that pattern, which was in the making for a couple of years, finally codified as a pattern. It's even more solidified now. The two most common motivations given are:
1. Bugs: You might discover a bug in your processing code, and so you may need to reprocess all the original data with the corrected code. 2. New Derived Metric: You might discover you need to track clicks per second rather than just clicks per minute. With the original data still around, it becomes possible to resummarize the raw data. Two Other Reasons But here are two other reasons, not usually stated when this pattern is presented: 3. New Data Enrichment: Suppose in your summarized data you don't store social security number even though it exists in the original data. Then your company just obtained the services of a data provider, and you're now able to get household income based on social security number. Now you can append this data as another column in the analytics database. 4. Reapply Machine Learning to a Bigger Data Set: This is perhaps the most important reason of all, due to The Unreasonable Effectiveness of Data. As more data becomes available over time from the original data streaming source, machine-learned models can be improved. Labels: BigData, Streaming Semantic Similarity Metrics Data Science is more than just statistics and machine learning on numbers. A lot of data is "unstructured," which means text (or worse, both text and numbers). While natural language processing has been around for half a century, its importance in the fields of Big Data and Data Science is growing and can no longer be ignored if one is to maintain competitive advantage. There is a planet full of tools, and herein I describe one grain of sand out of that planet: Semantic Similarity Metrics. Given a document of text (e.g. a Facebook posting or an e-mail), we can turn it into a set of words or a bag of words. A bag of words is like a set of words, except it also includes the multiplicity. E.g. the miniature document "Now, come now" represented as a set of words would be {"now", "come"}, whereas as a bag of words it would be {"now": 2, "come": 1}. Sets of words and bags of words can alternatively be considered as Boolean vectors and numeric vectors, respectively. A common need when processing documents is to evaluate their similarity, e.g. to determine if they are duplicates, or to determine how close a sample document might be to a "reference" document (e.g. for automated essay scoring). There are various similarity metrics available, for both Boolean and numeric vectors. Similarity Metrics for Boolean Vectors Recall that what we mean by "Boolean Vectors" are really just sets, and it is easier to think about and discuss these as sets rather than as literal Boolean vectors, so we use set notation. Jaccard The Jaccard Index is the simplest metric: \[\frac{\left|A \cap B\right|}{\left|A \cup B\right|}\] Dice-Sørensen The Dice-Sørensen (aka just Dice or just Sørensen) is similar to the Jaccard: \[\frac{2\left|A \cap B\right|}{\left|A\right| + \left|B\right|}\] They both give scores in the range [0,1]. But Dice emphasizes similarity, especially in the cases where one set is larger than the other. However, Dice does not satisfy the triangle inequality and thus is not a true metric in the mathematical sense of the word. Tversky Tversky is a generalization of Jaccard and Dice, in that Jaccard and Dice become just special cases of Tversky: \[\frac{\left|A \cap B\right|}{\left|A \cap B\right| + \alpha\left|A-B\right| + \beta\left|B-A\right|}\] We arrive at Jaccard with \(\alpha=\beta=1\) and at Dice with \(\alpha=\beta=0.5\).
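As a quick illustration of these three set-based metrics (my sketch, not from the original post, assuming a trivial lowercase/whitespace tokenizer), here they are in Python:

def tokenize(doc):
    # naive tokenizer: strip basic punctuation, lowercase, split on whitespace
    return {w.strip('.,!?";:').lower() for w in doc.split()}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def dice(a, b):
    return 2 * len(a & b) / (len(a) + len(b))

def tversky(a, b, alpha, beta):
    return len(a & b) / (len(a & b) + alpha * len(a - b) + beta * len(b - a))

a, b = tokenize("Now, come now"), tokenize("come here now")
print(jaccard(a, b))            # equals tversky(a, b, 1, 1)
print(dice(a, b))               # equals tversky(a, b, 0.5, 0.5)
print(tversky(a, b, 0.9, 0.1))  # asymmetric: words in a but not b are penalized more heavily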
But by varying \(\alpha\) and \(\beta\) to be different from each other, we can apply Tversky to situations where we wish to treat documents asymmetrically. For example, if instead of documents A and B that are treated equally, we have a reference set R (perhaps some sort of answer key) and a user set U, then by setting \(\alpha\) high we can "punish" the user for missing words that were expected in R. Alternatively, we could set \(\beta\) high to "punish" the scoring for not finding the R that best matches the user input U. Similarity Metrics for Numeric Vectors Instead of having sets A and B, we now consider numeric vectors X and Y, which are frequency counts in our bag of words. Tanimoto The Tanimoto metric is the numeric vector generalization of the Jaccard index for Boolean vectors: \[\frac{X \cdot Y}{\left|X\right|^2 + \left|Y\right|^2 - X \cdot Y}\] Here, the dot represents the vector dot product. Cosine The cosine similarity metric is similar in appearance to Tanimoto: \[\frac{X \cdot Y}{\left|X\right| \left|Y\right|}\] The cosine has the appealing property that 0 means a 90 degree separation, or complete orthogonality. Labels: MachineLearning SSD to the rescue of Peak Hard Drive? A couple of months ago, I blogged about Peak Hard Drive, that hard drive capacities were leveling off and how this would impact the footprints of data centers in the era of Big Data. Since then, there have been two major announcements about SSDs that indicate they may come to the rescue: SanDisk announced a 4TB SSD "this year" and 16TB possibly next year. Given that such technologies are typically delayed by one calendar year from their press releases, in the above chart, I've indicated those as becoming available in 2015 and 2016, respectively. Japanese researchers developed a technique to improve SSD performance by up to 300%. The 16TB in 2016 is phenomenal and would be four years sooner than the 20TB in 2020 predicted by Seagate. Much more than that, if the 16TB SSD is in the same form factor as its announced 4TB little brother, then it will be just a 2.5" drive, in contrast to the presumed 3.5" form factor for the 20TB Seagate HAMR drive. As you can see in the chart above, the 16TB puts us back on track of the dashed gray line, which represents the steady storage capacity growth we enjoyed from 2004 to 2011. Photo by Paul R. Potts in the Wikimedia Commons. It is because of the varying form factors that in my blog post two months ago I adopted the novel "Bytes/Liter" metric, which is a volumetric measure in contrast to the more typical "areal" metric that applies to spinning platters but not to SSDs. (Actually I changed the metric from log10(KB/Liter) two months ago to log10(Bytes/Liter) now, reasoning that Bytes is a more fundamental unit than KB, that it eliminates the KB vs. KiB ambiguity, and that it makes the chart above easier to read, where you can just pick out the MB, GB, TB, PB by factors of 3 in the exponent of 10.) This volumetric metric can handle everything from the 5.25" full-height hard drives of the 1980s to the varying heights of 2.5" hard drives, and it allows us to linearly extrapolate on the logarithm chart above. The direct overlay of the SSD line over the HDD line for the years 1999-2014 came as a complete shock to me. SSDs and HDDs have vastly different performance, form factor, and price characteristics. Yet when it comes to this novel metric of volumetric density, they've been identical for the past 15 years!
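To make the metric concrete, here is a small sketch (mine, not from the original post) that computes log10(Bytes/Liter) for a drive from approximate, assumed form-factor dimensions:

import math

def log10_bytes_per_liter(capacity_bytes, width_mm, depth_mm, height_mm):
    # Volumetric density metric: log10 of bytes per liter of drive volume
    liters = (width_mm * depth_mm * height_mm) / 1.0e6   # 1 liter = 1e6 cubic mm
    return math.log10(capacity_bytes / liters)

# Approximate form-factor dimensions, assumed for illustration only:
# 2.5" enterprise drive: roughly 70 x 100 mm footprint, 15 mm tall
print(log10_bytes_per_liter(4e12, 70, 100, 15))    # 4TB 2.5" 15mm SSD -> about 13.6
# 3.5" drive: roughly 102 x 147 mm footprint, 26 mm tall
print(log10_bytes_per_liter(6e12, 102, 147, 26))   # 6TB 3.5" HDD -> about 13.2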
Photo from tomshardware.com comparing a 9.5mm-height 2.5" drive to a 15mm one. Now, the announced 4TB 2.5" SSD and presumably also the 16TB SSD are not of the typical notebook hard drive form factor. The typical notebook hard drive is 9.5mm tall, whereas these high-capacity SSDs are 15mm tall. They're intended for data center use, such as in the 2U rack below. The configuration in the 2U chassis above is typical for 2.5" drives: just 24 drives, because they are all accessible from the front panel. I'm not aware of any high-density solutions for 2.5" drives such as those that exist for 3.5" drives, such as the one below that puts 45 drives into 4U. In time, there should be some higher-density rackmount solutions for 2.5" drives appearing, but for now, today's available solutions don't take full advantage of the compactness of 2.5" SSDs portrayed in the above chart, which measures volumetric density of the drive units themselves and not the chassis in which they reside. Also not clear is whether the 16TB SSD will be MLC or TLC. The 4TB drive is MLC, which means two bits per cell. If the 16TB drive is TLC, then three bits are stored in each cell (eight different voltage levels detected per cell), which can reduce lifespan by a factor of 3, and for that reason TLC drives are often not considered for enterprise data center use. For the moment, we're stuck at the inflection point in the above chart at 2014, wondering which dotted line data centers will be able to take in the future. Due to a combination of increased use of VMs in data centers and increased physical server density, projections were that we had reached peak physical square footage for data centers: that no more data centers would have to be built, ever (aside from technology improvements such as cooling and energy efficiency). The slide above is from SSE. My blog on Peak Hard Drive threatened to blow that away and require more data centers to be built, due to plateauing hard drive density combined with exploding Big Data use. But now with the two SSD announcements, we might be -- just maybe -- back on track for no more net data center square footage. Labels: BigData, Storage Apache Spark 1.0 almost here. Is it ready with 16 "unresolved blockers" in Jira? Apache Spark 1.0 is to be released any day now; currently "release candidate 6 (rc6)" is being evaluated and will be voted upon imminently. But is it ready? There are currently 16 issues marked as "unresolved blockers" in Jira for Spark, at least one of which is known to produce erroneous data results. Then there is the state of the REPL, the interactive Spark Shell recently lauded for making Spark accessible to data scientists, as opposed to just hard-core software developers. Because the Spark Shell wraps every user-entered command and class to do its interactive magic, some basic Spark functions fail to operate, such as lookup() and anything requiring equals() on a compound key (i.e. a custom Scala class, as opposed to just using String or Int for a key) for groupByKey() and other combineByKey() derivatives. It even affects map(), the most fundamental of all functional programming operations. Even putting the REPL aside and just writing full-fledged programs in Scala, the native language of Spark, simple combinations such as map() and lookup() throw exceptions. Don't get me wrong. Spark is a great platform, and is where it should be after two years of open source development. It's the "1.0" badge that I object to. It feels more like a 0.9.2 release.
Labels: Spark GeoSparkGrams: Tiny histograms on map with IPython Notebook and d3.js Daily variation of barometric pressure (maximum minus minimum for each day) in inches, for the past 12 months. For each of the hand-picked major cities, the 365 daily ranges for that city are histogrammed. Here "spark" is in reference to sparklines, not Apache Spark. Last year I showed tiny histograms, which I coined as SparkGrams, inside an HTML5-based spreadsheet using the Yahoo! YUI3 Javascript library. At the end of the row or column, a tiny histogram inside a single spreadsheet cell showed at a glance the distribution of data within that row or column. This time, I'm placing SparkGrams on a map of the United States, so I call these GeoSparkGrams. This time I'm using IPython Notebook and d3.js. The notebook also automatically performs the data download from NOAA. The motivation behind this analysis is to find the best place to live in the U.S. for those sensitive to barometric volatility. The above notebook requires IPython Notebook 2.0, which was released on April 1, 2014, for its new inline HTML capability and ease of integrating d3.js. Labels: D3, iPython, Visualization Matplotlib histogram plot from Numpy histogram data Of course Pandas provides a quick and easy histogram plot, but if you're fine-tuning your histogram data generation in NumPy, it may not be obvious how to plot it. It can be done in a couple of lines:
hist = numpy.histogram(df.ix[df["Gender"]=="Male","Population"],range=(50,90))
pandas.DataFrame({'x':hist[1][1:],'y':hist[0]}).plot(x='x',kind='bar')
Labels: iPython Peak Hard Drive This past week, Seagate finally announced a 6TB hard drive, which is three years after their 4TB hard drive. Of course, Hitachi announced their hermetically-sealed helium 6TB hard drives in November 2013, but only to OEM and cloud customers, not for retail sale. Hard drive capacities are slowing down as shown in the chart below. To account for the shrinking form factors in the earlier part of the history, and to account for exponential growth, I've scaled the vertical axis to be the logarithm of kilobytes (1000 bytes) per liter. This three-year drought in hard drive capacity increases is represented by the plateau between the last two blue dots in the graph, representing 2011 and 2014. The red line extension to 2020 is based on Seagate's prediction that by then they will have 20TB drives using HAMR technology, which uses a combination of laser and magnetism. However, if the trendline from 2004-2011 had continued, by linear extrapolation on this log scale, hard drives would have been 600TB by 2020. This is not good news for users of Big Data. Data sizes (and variety and number of sources) are continuing to grow, but hard drive sizes are leveling off. Horizontal scaling is no longer going to be optional; the days of the monolithic RDBMS are numbered. Worse, data center sizes and energy consumption will increase proportionally to growth in data size rather than be tempered by advances in hard drive capacities as we had become accustomed to. We haven't reached an absolute peak in hard drive capacity, so the term "peak hard drive" is an exaggeration in absolute terms, but relative to corporate data set sizes, I'm guessing we did reach peak hard drive a couple of years ago. QED: Controlling for Confounders We see it all the time when reading scientific papers, "controlling for confounding variables," but how do they do it?
The term "quasi-experimental design" is unknown even to many who today call themselves "data scientists." College curricula exacerbate the matter by dictating that probability be learned before statistics, yet this simple concept from statistics requires no probability background, and would help many to understand and produce scientific and data science results. As discussed previously, a controlled randomized experiment from scratch is the "gold standard". The reason is because if there are confounding variables, individual members of the population expressing those variables are randomly distributed and by the law of large numbers those effects cancel each other out. Most of the time, though, we do not have the budget or time to conduct a unique experiment for each question we want to investigate. Instead, more typically, we're handed a data set and asked to go and find "actionable insights". This lands us into the realm of quasi-experimental design (QED). In QED, we can't randomly assign members of the population and then apply or not apply "treatments". (Even in data science when analyzing e.g. server logs, the terminology from the hard sciences holds over: what we might call an "input variable" is instead called the "treatment" (as if medicine were being given to a patient) and what we might call an "output variable" is instead called the "outcome" (did the medicine work?).) In QED, stuff has already happened and all we have is the data. In QED, to overcome the hurdle of non-random assignment, we perform "matching" as shown below. The first step is to segregate the entire population into "treated" and "untreated". In the example below, the question we are trying to answer is whether Elbonians are less likely to buy. So living in Elbonia (perhaps determined by a MaxMind reverse-IP lookup) is the "treatment", not living in Elbonia is "untreated", and whether or not a sale was made is the "outcome". We have two confounding variables, browser type and OS, and in QED that is what we match on. In this way, we are simulating the question, "all else being equal, does living in Elbonia lead to a less likely sale?" In this process, typically when a match is made between one member of the treated population and one member of the untreated population, both are thrown out, and then the next attempt at a match is made. Now as you can imagine, there are all sorts of algorithms and approaches for what constitutes match (how close a match is required?), the order in which matches are taken, and how the results are finally analyzed. For further study, take a look at the book Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Labels: DataScience Quick Way to Play With Spark If you're interested in a quick way to start playing with Apache Spark without having to pay for cloud resources, and without having to go through the trouble of installing Hadoop at home, you can leverage the pre-installed Hadoop VM that Cloudera makes freely available to download. Below are the steps. Because the VM is 64-bit, your computer must be configured to run 64-bit VM's. This is usually the default for computers made since 2012, but for computers made between 2006 and 2011, you will probably have to enable it in the BIOS settings. Install https://www.virtualbox.org/wiki/Downloads (I use VirtualBox since it's more free than VMWare Player.) Download and unzip the 2GB QuickStart VM for VirtualBox from Cloudera. 
Launch VirtualBox and from its drop-down menu select File->Import Appliance. Click the Start icon to launch the VM. From the VM window's drop-down menu, select Devices->Shared Clipboard->Bidirectional. From the CentOS drop-down menu, select System->Shutdown->Restart. I have found this to be necessary to get HDFS to start working the first time on this particular VM. The VM comes with OpenJDK 1.6, but Spark and Scala need Oracle JDK 1.7, which is also supported by Cloudera 4.4. From within CentOS, launch Firefox and navigate to http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html. Click the radio button "Accept License Agreement" and click to download jdk-7u51-linux-x64.rpm (64-bit RPM), opting to "save" rather than "open" it. I.e., save it to ~/Downloads. From the CentOS drop-down menu, select Application->System Tools->Terminal and then:
sudo rpm -Uivh ~/Downloads/jdk-7u51-linux-x64.rpm
echo "export JAVA_HOME=/usr/java/latest" >>~/.bashrc
echo "export PATH=\$JAVA_HOME/bin:\$PATH" >>~/.bashrc
wget http://d3kbcqa49mib13.cloudfront.net/spark-0.9.0-incubating.tgz
tar xzvf spark-0.9.0-incubating.tgz
cd spark-0.9.0-incubating
SPARK_HADOOP_VERSION=2.2.0 sbt/sbt assembly
bin/spark-shell
That sbt assembly command also has the nice side-effect of installing scala and sbt for you, so you can start writing Scala code to use Spark instead of just using the Spark Shell. Visualization of How Real-time Cardinality Algorithms Work In a previous blog post, Real-time data science, I textually described an algorithm that can be used, for example, in real-time data streaming applications to estimate the size (cardinality) of a set. I don't think a description can convey how it works as well as a visualization, so I created an iPython Notebook. Below are the images from that Notebook, but you'll need to click on the Notebook link if you want to see how the images were generated and the detailed description. As I described in the previous blog post, the motivation behind trying to count items in a data stream might be, for example, doing the equivalent of a SQL COUNT DISTINCT on a website's clickstream to get the number of unique visitors to that website. Rather than maintaining in memory a list of unique IDs (say, IP addresses), which can consume a lot of memory for a popular website, instead we rely on a hash function and just keep track of the minimum hash value ever seen. That's right, instead of saving n IP addresses (where n might be a million for a very popular website), we just save one data value, the smallest hash value ever seen. The scatter plot below shows the result. For the top dotted row, 10 random UUIDs were hashed to floating point values in the range [0,1). For the middle row, 100 UUIDs, and for the bottom row, 1000 UUIDs. Then we just count the number of leading zeroes of the smallest hash value by using the log10 function. To get the cardinality, we then just raise 10 to the power of that number of leading zeros. And zooming in to the left portion of the plot: the larger the set, the greater the chance that a hash value will happen to end up closer to zero (left edge of the plot). To get the size of the set, we just count the number of leading zeros, and then raise 10 to the power of that number. That leaves a lot to chance, of course, so there are ways to reduce the effects of randomness and improve the accuracy.
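Here is a minimal sketch of that basic estimator (my own illustration, not from the Notebook): hash each ID to [0,1), keep only the minimum, and turn its number of leading zeros back into a cardinality estimate.

import hashlib
import math
import uuid

def hash_to_unit_interval(item):
    # Map an ID to a pseudo-random float in [0, 1); any well-mixed hash function would do.
    return int(hashlib.md5(item.encode()).hexdigest(), 16) / 2**128

def estimate_cardinality(ids):
    # Keep only the smallest hash value ever seen (one float of state).
    min_hash = min(hash_to_unit_interval(item) for item in ids)
    leading_zeros = -math.log10(min_hash)   # roughly the count of leading zeros
    return 10 ** leading_zeros              # estimate: 10 raised to that count

ids = [str(uuid.uuid4()) for _ in range(1000)]
print(estimate_cardinality(ids))   # a noisy single estimate; often off by a large factor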
One straightforward way, described in my iPython Notebook linked above, is to split the set into, say, 10 subsets, apply the algorithm to each subset independently, average the results somehow, and multiply that average by 10 to get the cardinality of the whole original set. So in terms of memory utilization, this approach requires the space of 10 floating point numbers instead of just 1. In my iPython Notebook, I just do a simple straight average, and it does improve the accuracy significantly, but it's still off by a factor of 3 in the particular case I tried. A better way to average is a geometric mean. And best still is the HyperLogLog algorithm, which employs something called the harmonic mean. The bright folks over at Metamarkets, developers of Druid, ran some experiments two years ago and showed that HyperLogLog produces cardinalities that are 98% accurate. Labels: Streaming Corrgram: Multi-variate visualization Corrgrams, invented and coined by Michael Friendly in his 2002 American Statistician paper, are a powerful and rapid way to visualize a dozen or more dimensions simultaneously when in the exploratory phase of multi-variate analysis. (Note that corrgrams are sometimes erroneously referred to as correlograms, which are something completely different, used for time series analysis.) The visualization below is an example generated by the R package corrgram on the Lending Club peer-to-peer lending data that was part of the homework assignment for the Coursera class I took a year ago, Data Analysis. In the visualization above, brightness (more properly, saturation) of red indicates negative correlation and brightness (saturation) of blue indicates positive correlation, meaning weakly correlated dimensions appear grayish. The bright red box that jumps out is that FICO score is strongly negatively correlated to interest rates. This highlights two unsurprising points: 1) the higher the FICO score, the lower the interest rate, and 2) FICO score has the strongest influence (by eyeball comparison to all the faded blue squares) on interest rate. We also see that things like loan length, debt-to-income ratio and number of open credit lines increase interest rate, with loan length being the strongest of those secondary influences. But there's more we can pull out of this visualization. Notice that number of inquiries in the last six months (which means the number of inquiries on one's credit report with the FICO scoring agencies coming from all loan or credit applications, not just those from Lending Club) is a strong influence on Lending Club interest rate. But the correlation between number of inquiries in the last six months and FICO, while a negative correlation as expected, is only a weak negative correlation, judging from its very pale rose color. That suggests that perhaps Lending Club lenders more strongly dislike (and thus penalize) borrowers with a lot of credit inquiries than do conventional lenders. It suggests perhaps that Lending Club lenders dislike (disproportionately so relative to conventional lenders, or at least the FICO scoring system itself) being the "lender of last resort" and assign a higher risk and thus higher interest rate to such situations. This quilt of colors can't tell us all this for certain -- neither numerically in statistics nor certainly in terms of causality -- but it quickly points us onto paths of investigation that could lead to verifying such unanticipated insights.
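The visualization discussed above came from the R corrgram package; as a rough stand-in for readers working in Python (my sketch, not from the original post, on toy data with hypothetical column names, and not the corrgram package itself), a correlation matrix can be rendered as a similar red-to-blue grid with pandas and matplotlib:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Toy numeric data standing in for a few Lending Club-style columns
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "fico": rng.normal(700, 40, 500),
    "interest_rate": rng.normal(12, 3, 500),
    "loan_length": rng.integers(36, 61, 500),
})

corr = df.corr()   # Pearson correlation matrix
plt.imshow(corr, vmin=-1, vmax=1, cmap="RdBu")  # red = negative, blue = positive
plt.colorbar()
plt.xticks(range(len(corr)), corr.columns, rotation=90)
plt.yticks(range(len(corr)), corr.columns)
plt.show()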
Now, the 12 dimensions in the above visualization push the envelope of what is practical with corrgrams, whereas data sets in real life often have hundreds of dimensions. In multi-variate analysis, one way to reduce the number of dimensions is to perform a random forest followed by a variable importance plot. While random forest has a reputation of being opaque, one can still easily obtain the list of variables chosen as top nodes most often. From that list, simply pick the first dozen or so and plug them into a corrgram to visualize the interactions amongst the most important deciding variables. This can be improved further through iteration: if two variables, such as, hypothetically, "Average bank balance for past 3 months" and "Average bank balance for past 6 months" are shown to be strongly correlated, you can discard one of those in the corrgram and use that valuable corrgram slot for a different variable. Labels: DataScience, Visualization Science Data Science We commonly hear about "data science" in the context of mining marketing or business data, especially "Big Data", but of course scientists have been practicing statistics for centuries. But recall data science is more than just statistics, from Drew Conway's famous Venn Diagram: For scientists to move out of "traditional research" and into data science, they need to add computer-based skills such as machine learning and big data. Just data management has long been a problem in the scientific realm. We learned last month that 80% of scientific data from science conducted over the past 20 years is already lost due to poor data retention policies. In recognition of that, U.S. government funding grants now require a data retention plan to be included in funding proposals. And just two days ago, IEEE Spectrum posted a blog article on Gordon Moore's new law, that "big data will lead to big science," and his philanthropic efforts to support that. But why is scientific data retention so poor? Having worked in the scientific software development field for half my career, I can speculate on several reasons: Scientific data is not amenable to conventional (e.g. relational) databases. Scientific data sets are typically array-based (2D, 3D, 4D, and higher), where the array indices rather than relational metadata describe the data. This is a fancy way of saying a bunch of flat unannotated binary files, but there are reasons scientists use such files: they are compact relative to, say, XML and JSON; they are easy to write software to read and write; and they are not tied to a particular software vendor. With their ease of use, though, comes ease of deletion. Corporate and institutional cultures pay no heed when files get deleted, but try and delete a database and suddenly the resistance increases dramatically. Along with the convenience of binary files preferred by scientists comes the disrespect of files. Until this focus over the past 3-5 years on data retention, funding proposals never included funding for prolonged data retention. Data retention is expensive. Data formats change, from 8" floppies, to 5.25" floppies, to Bernoulli drives, to 3.5" floppies, to Zip drives, to Jaz drives, to QIC tapes, to CD-ROM, to DVD-R, to LTO tapes, to USB drives, not to mention data on raw hard drives: MFM, RLL, SCSI, Ultra SCSI, IDE, SATA, SAS. It takes both labor and capital investment to continually propagate data from one format to another. 
Not only must data be format-shifted, even within a single format it must be refreshed to guard against physical or magnetic decay. Properly maintaining data also involves keeping multiple backups, including at off-site locations. Compounding the issue is that scientific data retains its value even more than business data does. Longitudinal studies of humans or civil engineering edifices can leverage data spanning a century or more. New scientific studies frequently make use of old data sets, applying new techniques or new insights, when such data is available (or, perhaps, if only it were available). Scientists are not experts in computers, let alone IT "best practices". Scientists typically know enough about computers to get by, but have not yet generally added that third bubble from the Venn diagram. Scientists often have to rely on proprietary and commercial software and systems for data collection. These systems are specialized, and have no open source counterparts due to the lack of economic forces that propel open source software in the realm of business software. Scientific software even often comes tethered to dongles, or works only on proprietary operating systems no longer available (such as MS-DOS). I have even blogged and presented on an alternative to all this, XML/XSL/HTML5 for reports instead of PDF, where I suggest visualization and presentation software programs be encoded in the form of open-source Javascript instead of closed-source proprietary and commercial binaries, but I know of no uptake outside of my singular implementation of the idea. Data retention is the critical first step to expanding data science in the scientific realm. Without data retention, there can be no statistics and machine learning over data sets that include data from the past or data from other researchers. I can imagine scientists universally taking to tools like R, iPython Notebook, and Weka like fish to water, but without data, there is no water. Hypothesis formation Somewhat of a sequel to my earlier blog post on causality, where do hypotheses come from? The ideal hypothesis:
- Has basis in a reasonable engineering, physical, or economic, etc. model.
- Is as simple as can be in terms of number of variables. I.e., Occam's Razor has been applied.
- Either has been vetted against a number of other hypotheses and selected as the most reasonable, or will be tested along with other reasonable hypotheses.
- Will be tested in the gold standard, the randomized controlled experiment.
- Is actionable.
Real life is not ideal, so below I discuss compromises and trade-offs involved in hypothesis formation. Basis in a Model As discussed in my causality blog entry, the only way to assign causality is to develop a rational model about how things really work, not just from the output of some multivariate correlation done in R. The best hypotheses are rooted in causation, though it is of course possible to hypothesize any conjecture at all, including from statistical correlations discovered during data exploration. Discovery from data as a source of hypothesis is better than pulling from thin air, but hypothesizing from a model is best of all. Hypothesizing from data is called induction and hypothesizing from a model is called deduction. The fewer the variables, the stronger the hypothesis and the more robust it is, by which it is meant the more likely it will hold up to a variety of conditions. E.g., suppose we induce a hypothesis from data exploration that teenage girls who use Twitter like Justin Bieber.
A stronger hypothesis (if it turns out to be true) would drop the Twitter condition, not only because it broadens the potential market for Justin Bieber products, but also because it is more resilient under varied circumstances, such as a time when (assuming some sort of unlikely calamity befalls Twitter) Twitter is no longer popular and something else takes its place.

Vetted Against Competing Hypotheses

When forming a hypothesis, it is important to brainstorm as many different plausible hypotheses as possible, from a variety of sources:

- As with conventional brainstorming, ask fellow team members and associates for their creative hypotheses.
- Formulate as complete a model as possible, and from that model identify explanations. E.g., when modeling a consumer: What is the consumer's budget? What is the consumer's pay schedule? Are there upcoming holidays that would either enhance purchases (in anticipation) or hinder them (due to store or bank closures)? What products complement the products the consumer already owns? What products would enhance the consumer's social standing? Does the consumer carry credit cards that are accepted? Is the consumer a student? The model doesn't have to be complete and fully accurate -- just enough to spark brainstorming. I.e., it's not necessary to create a Bayesian Belief Network or a Root-Cause Analysis fishbone diagram just to hypothesize.
- Identify leading hypotheses and test them. This is easier said than done. "Identifying" is a nice way of saying "hunch," because the alternative, "test," is very expensive if done by the gold standard, the controlled randomized experiment. And by so "identifying a leading hypothesis," one becomes subject to the cherry-picking I discussed in the panel "Resolved: Traditional Statistics is Dead". It's nice to pick the best hypothesis from a bunch, but to ensure you don't stumble into a spurious correlation, it's ideally necessary to test all similar hypotheses. In the example presented in that forum, there turned out to be a correlation between Super Bowls and presidential elections. Aside from the obvious modeling deficiencies, my response was to ask whether correlations between MLB pennants or NHL cups and presidential elections had also been tested. However, the alternative to picking good hypotheses is to leave it to chance, which is not productive. So pick good hypotheses, but beware of spurious correlations, especially if your hypothesis came from induction from the data rather than deduction from a model.

Controlled Randomized Experiment

Controlled randomized experiments are the gold standard, but they are expensive and time consuming. It is much more convenient and quicker to find and test correlations in existing data sets, but such correlations are fraught with problems: the population is not randomized across the independent variables of the new hypothesis, limited data for train vs. test effectively leads to test data becoming training data, experimental conditions differ, and so on. From a practical standpoint, though, "quasi-experiments" (experiments on an existing data set) are the general rule encountered in practice and true "experiments" are, realistically, the exception. Compensating for the shortcomings of quasi-experiments will be the subject of a future article.

Actionable

You can have the most interesting, perhaps even insightful, hypothesis, but if there is no reasonable course of action to take once it is proven, it's a waste of time to prove it.
Good hypothesis formation:

- Avoids wasting time testing bad hypotheses.
- Saves time that can be redirected toward testing the best hypotheses, including testing hypotheses adjacent to the leading hypotheses to avoid spurious correlations (the sketch below illustrates how easily such spurious correlations appear).
- Results in more resilient, more actionable insights.
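To make the spurious-correlation warning concrete, here is a minimal Python sketch; all numbers are arbitrary illustrations and are not tied to any of the examples above. It shows how screening many unrelated candidate predictors against one outcome almost always produces an impressive-looking correlation by chance alone.

import numpy as np

rng = np.random.default_rng(0)

# One outcome series (e.g. 20 election cycles) and many unrelated candidate predictors
# (e.g. hundreds of sports statistics); everything here is random noise by construction.
n_periods, n_candidates = 20, 500
outcome = rng.normal(size=n_periods)
candidates = rng.normal(size=(n_candidates, n_periods))

# Screen every candidate against the outcome and keep the single "best" one.
corrs = np.array([np.corrcoef(c, outcome)[0, 1] for c in candidates])
best = int(np.abs(corrs).argmax())
print(f"strongest |r| among {n_candidates} unrelated candidates: {abs(corrs[best]):.2f}")
# With enough candidates, the winner looks convincing even though no real relationship exists,
# which is why the adjacent hypotheses (MLB pennants, NHL cups, ...) need to be tested too.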
Synthetic and natural antioxidants attenuate cisplatin-induced vomiting

Javaid Alam (1), Fazal Subhan (1, corresponding author), Ihsan Ullah (2), Muhammad Shahid (1), Gowhar Ali (1) and Robert D. E. Sewell (3)

BMC Pharmacology and Toxicology 2017, 18:4
Received: 6 June 2016; Accepted: 13 December 2016
The Erratum to this article has been published in BMC Pharmacology and Toxicology 2017, 18:9.

Background: Synthetic and natural antioxidants, including Bacopa monnieri (L.) Pennell (Scrophulariaceae) which also possesses anti-dopaminergic properties, have been proposed to be useful against the emesis caused by chemotherapy. In this study, synthetic [N-(2-mercaptopropionyl) glycine (MPG), vitamin C (Vit-C)] and natural [grape seed proanthocyanidin (GP), B. monnieri n-butanolic fraction (BM-ButFr)] antioxidants and their combinations were evaluated against cisplatin-induced emesis in pigeons during a 24 h observation period.

Methods: Emesis was induced using cisplatin (7.0 mg/kg, i.v.). MPG (10, 20, 30 mg/kg), Vit-C (100, 200, 300 mg/kg), GP (50, 100, 150 mg/kg) and BM-ButFr (5, 10, 20 mg/kg) and their combinations were administered i.m., 15 min before cisplatin administration. The number of vomiting bouts, retching, emetic latency and % weight loss were recorded to assess antiemetic potential. Antioxidant activity was evaluated by the DPPH free radical scavenging assay (FRSA).

Results: Significant attenuation of vomiting bouts, retching and % weight loss, along with an increase in latency, was produced by all the antioxidants and their combinations compared to cisplatin alone, and this is the first report of this activity of GP in pigeons. Low EC50 values in the FRSA for MPG (67.66 μg/mL), Vit-C (69.42 μg/mL), GP (6.498 μg/mL) and BM-ButFr (55.61 μg/mL) compared to the BHT standard (98.17 μg/mL) demonstrated their radical scavenging capacity. Correlation between the antioxidant activity and antiemetic efficacy disclosed a high degree of correlation for the tested antioxidants.

Conclusions: The selected synthetic and natural antioxidants and their combinations were able to attenuate cisplatin-induced vomiting, which correlated with their potent in vitro antioxidant activity.

Keywords: Grape seed proanthocyanidin, N-(2-mercaptopropionyl) glycine, Free radical scavenging assay

Background

Nausea and vomiting are the most distressing and commonly occurring side effects of chemotherapeutic agents [1]. Indeed, chemotherapy can induce both acute and delayed phases of nausea and vomiting [2]. In pigeons and piglets for instance, the acute phase lasts for 8–16 h while the delayed phase may endure for 48–58 h [3]. However, in humans, the acute and delayed phases persist for 24 h and 7 days, respectively [4]. The mechanisms underlying emesis have been investigated in carnivores such as ferrets, dogs [5], and cats [6], insectivores like Suncus murinus (musk shrew) [7] and Cryptotis parva (least shrew) (Soricidae) [8, 9]. Similarly, birds, notably pigeons, also display clear-cut emetic responses to copper sulphate, glucagon, digoxin [10], theophylline [11] and amantadine [12]. Cisplatin is an effective chemotherapeutic agent indicated for the management of different malignancies including ovarian [13], head and neck [14], testicular and bladder carcinomas [15]. Its use is associated with many side effects, of which vomiting is distinctly the most distressing [16]. It decreases the plasma levels of various antioxidants [17] and also generates both oxidative and nitrosative stress [18, 19].
The oxidative stress component plays a significant role in cisplatin-induced side effects as well as various other complications [20]. Antioxidants are effective in reducing oxidative stress evoked by cisplatin [21] and they play a pivotal role in protecting against cisplatin elicited nephrotoxicity [22], hepatotoxicity [23] and ototoxicity [24]. A range of antioxidants including vitamin C, N-2-mercaptopropionyl glycine (MPG, also named tiopronin), glutathione and vitamin E are known to be effective in cisplatin emetogenesis [25]. In this context, the synthetic antioxidant MPG, reduces cisplatin and pyrogallol provoked vomiting in Suncus murinus [26]. Proanthocyanidin (GP) an antioxidant flavonoid [27], not only possesses neuroprotective activity in humans [28] and animals [29], but also initiates a reduction of pica behavior (eating of non-food substances as a model of simulated emesis) in rats [30]. Additionally, it is effective against cisplatin-induced nephrotoxicity and hepatotoxicity [31, 32], provides cardioprotection and enhances cognitive performance [33]. Similarly, Bacopa monnieri (L.) Pennell (Scrophulariaceae), a reputed nootropic plant and a rich source of bacosides has strong antioxidant [34] and neuroprotective properties [35]. Although previous studies advocate the effectiveness of antioxidants as antiemetics [25, 36], there is no report available in the literature showing any direct correlation between antiemetic propensity and antioxidant activity in the pigeon model for emetogenesis. The aim of this study therefore was threefold: firstly to determine any antiemetic activity of selected natural and synthetic antioxidants either alone or in combination. Secondly to evaluate their antioxidant potential and thirdly, to establish any possible correlation between their antioxidant and antiemetic activities. Pigeons of either sex (mix breed, Department of Pharmacy, University of Peshawar, Pakistan) weighing between 200–400 g were used. They were acclimatized 24 h before the start of the experiment and were maintained at 22–26 °C on a 12 h light-dark cycle. Food and water were provided ad libitum. The experiments were performed in accordance with the UK Animals (Scientific Procedures) Act 1986 and were approved by the Ethical Committee of the Department of Pharmacy, University of Peshawar (Reference No. 14/EC-12/Pharm). Chemicals and standards Cisplatin (Korea United Pharm. Inc. Korea), analytical grade methanol (Sigma-Aldrich, Switzerland), 2,2-diphenyl-1-picrylhydrazyl (DPPH; Sigma-Aldrich, Germany), vitamin C (Vit-C; Sigma-Aldrich, Germany), butylated hydroxytoluene (BHT; Sigma-Aldrich, Germany), grape seed proanthocyanidin extracts (GP; Shaanxi Run-time Biotechnology Development Co. Ltd, Xian, China), N-(2-mercaptopropionyl) glycine (MPG; Sigma-Aldrich, Germany), metoclopramide (GlaxoSmithKline). Preparation of n-butanolic fraction of B. monnieri extract Whole plant of B. monnieri was collected in November, 2010 from Rumalee stream near Quaid-e-Azam University, Islamabad, Pakistan. It was authenticated by Prof. Dr. Muhammad Ibrar of the Department of Botany, University of Peshawar and a specimen was deposited in the herbarium of the same Department with a voucher No 7421. The aerial parts were separated, shade dried and coarsely powdered. They were extracted with methanol in a Soxhlet apparatus and further fractionated to obtain the n-butanolic fraction (BM-ButFr) which is reported to be rich in bacosides [37]. Bacosides are the major active constituents of B. 
monnieri and they are considered to be responsible for its myriad pharmacological properties [38].

Antiemetic activity

Preparation of drug solutions

Cisplatin was dissolved in saline at 65–75 °C with continuous shaking and was cooled before administration. GP, MPG, Vit-C and BM-ButFr were dissolved in sterile normal saline by gentle agitation and sonicated until uniform solutions were obtained. These solutions were then immediately administered by the intramuscular route (i.m.).

Induction of emesis

The maximal (100%) emetic dose of cisplatin (7.0 mg/kg) was used as the emetic challenge [39]. It was administered intravenously via the brachial vein and pigeon behavior was recorded for 24 h by video recorder. Any response, with or without oral expulsion of gastric contents, was counted as one bout (vomiting episode); responses separated by a gap of more than 1 min were counted as separate vomiting episodes, each comprising from 2 up to 80 retches (emetic behaviors) [40]. The latency to first vomit, the number of bouts, retching and % weight loss were all recorded. Emesis was induced by administering cisplatin (7.0 mg/kg) intravenously. MPG, Vit-C, GP and BM-ButFr or their combinations, as well as metoclopramide used as the standard antiemetic agent, were administered as pretreatments 15 min i.m. prior to cisplatin administration. The different doses of the tested compounds were selected according to previous studies [25, 39, 41, 42]. The animals were divided into the following groups:

Group I: Cisplatin control (7.0 mg/kg), n = 8.
Group II: Metoclopramide (30 mg/kg), n = 8.
Group III: MPG (10, 20, and 30 mg/kg), n = 8 per dose.
Group IV: Vit-C (100, 200, and 300 mg/kg), n = 8 per dose.
Group V: GP (50, 100, and 150 mg/kg), n = 8 per dose.
Group VI: BM-ButFr (5, 10, and 20 mg/kg), n = 8 per dose.
Group VII: Combination of MPG (10 mg/kg) plus Vit-C (200 mg/kg), n = 7.
Group VIII: Combination of BM-ButFr (10 mg/kg) plus GP (100 mg/kg), n = 8.
Group IX: Combination of GP (100 mg/kg) plus Vit-C (200 mg/kg), n = 8.

The percentage reduction in the frequency of cisplatin-induced vomiting bouts was calculated as:

$$ \%\ \text{Reduction} = \left(1 - \frac{\text{mean number of bouts after treatment}}{\text{mean number of bouts of untreated control}}\right) \times 100 $$

DPPH (2,2-diphenyl-1-picrylhydrazyl) free radical scavenging activity in vitro

The antioxidant activities of MPG, Vit-C, GP and BM-ButFr were evaluated by the DPPH free radical scavenging assay [43, 44]. Briefly, 2.0 mL of methanolic 0.1 mM DPPH free radical solution was added to 1.0 mL of different concentrations (1.0, 10, 30, 50, 100, 200, 500 μg/mL) of GP, MPG, Vit-C, BM-ButFr or standard (BHT: butylated hydroxytoluene) in methanol. The solutions were shaken thoroughly, incubated in the dark at ambient temperature for 30 min, and absorbance was measured at 517 nm using a UV/Visible spectrophotometer (Lambda 25, PerkinElmer, USA). The % scavenging of DPPH free radicals was calculated as follows:

$$ \%\ \text{DPPH free radical scavenging activity} = \frac{A_{I} - A_{II}}{A_{I}} \times 100 $$

where A_I is the absorbance of the control reaction and A_II is the absorbance in the presence of the sample.
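Purely as an illustration of the scavenging formula above and of an EC50 estimated by non-linear regression (the analysis described next), the following Python sketch uses hypothetical absorbance readings and a four-parameter logistic curve; none of the numbers, nor the choice of model, come from this study.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical absorbance readings at 517 nm (illustrative only, not data from this study)
concs = np.array([1.0, 10, 30, 50, 100, 200, 500])              # antioxidant concentration, ug/mL
a_control = 0.92                                                 # DPPH solution alone (A_I)
a_sample = np.array([0.85, 0.70, 0.55, 0.45, 0.28, 0.12, 0.05])  # DPPH + antioxidant (A_II)

# Percent scavenging: (A_I - A_II) / A_I x 100
scavenging = (a_control - a_sample) / a_control * 100.0

# Four-parameter logistic concentration-response curve; EC50 is where scavenging reaches 50%
def logistic(c, bottom, top, ec50, hill):
    return bottom + (top - bottom) / (1.0 + (ec50 / c) ** hill)

popt, _ = curve_fit(logistic, concs, scavenging, p0=[0.0, 100.0, 50.0, 1.0], maxfev=10000)
print(f"estimated EC50 ~ {popt[2]:.1f} ug/mL")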
The EC50, defined as the concentration of antioxidant causing a 50% loss of DPPH activity, was calculated from the graph of absorbance versus the respective concentrations using non-linear regression analysis. All experiments were performed in triplicate.

Statistical analysis

Data were expressed as mean ± S.E.M (n = 7–8) and analyzed by one-way ANOVA followed by Tukey's multiple comparison test using GraphPad Prism 5 (GraphPad Software Inc., San Diego, CA, USA). Correlation analysis of antioxidant activity versus antiemetic activity of each antioxidant was carried out using the Pearson's correlation and regression program in Minitab version 17.1.0 (Minitab Inc., State College, PA 16801, USA).

Results

Antiemetic activity of N-(2-mercaptopropionyl) glycine (MPG), vitamin C (Vit-C), grape seed proanthocyanidin (GP), and B. monnieri n-butanolic fraction (BM-ButFr) as well as MPG + Vit-C, BM-ButFr + GP and GP + Vit-C combinations

As shown in Fig. 1, cisplatin generated a consistently maintained number of vomiting bouts over a 24 h period. It also induced retching and weight loss (%), and decreased the latency to first vomit (Table 1). Highly significant reductions in the number of bouts were found with MPG [F (3,28) = 10.62, P < 0.0001] (10, 20 mg/kg; 0–8 h, P < 0.001) (Fig. 2a), Vit-C [F (3,28) = 9.985, P = 0.0001] (100–300 mg/kg; 0–4 h, P < 0.001) (Fig. 2b), GP [F (3,28) = 50.97, P < 0.0001] (50–150 mg/kg; 0–8 h, P < 0.001) (Fig. 2c) and BM-ButFr [F (3,28) = 48.28, P < 0.0001] (5–20 mg/kg; 0–8 h and 13–16 h, P < 0.001) (Fig. 2d). However, the reductions were less significant (P < 0.01) for MPG at 30 mg/kg (0–4 h; Fig. 2a), Vit-C at 200 mg/kg (5–8 h; Fig. 2b), GP at 150 mg/kg (13–24 h; Fig. 2c) and BM-ButFr at 5–20 mg/kg (17–20 h; Fig. 2d) as compared to the cisplatin control. In the case of the combinations [F (3,27) = 33.55, P < 0.0001], highly significant inhibition (P < 0.001) of vomiting bouts was observed up to 4 h after cisplatin administration. The GP (100 mg/kg) + Vit-C (200 mg/kg) combination maintained marked significance at 5–8 h and during 17–20 h (P < 0.001), whilst the other two combinations had fluctuating inhibitory activity of low statistical significance (see Fig. 2e).

Fig. 1 Cisplatin (7.0 mg/kg i.v.) induced vomiting bouts in pigeons during a 24 h observation period. Each bar represents mean ± S.E.M (n = 8)

Table 1 Activity of N-(2-mercaptopropionyl) glycine (MPG), vitamin C (Vit-C), grape seed proanthocyanidin (GP), B. monnieri n-butanolic fraction (BM-ButFr) and their combinations against cisplatin-induced vomiting during a 24 h observation period. [Columns: dose and route; pigeons tested/vomited; latency (min); retching; weight loss (%). Table body flattened in the source; values are expressed as mean ± S.E.M; * P < 0.05, ** P < 0.01, *** P < 0.001 compared to cisplatin control (ANOVA followed by Tukey's post hoc analysis).]

Fig. 2 Antiemetic effect of N-(2-mercaptopropionyl) glycine (MPG), vitamin C (Vit-C), grape seed proanthocyanidin (GP), and B. monnieri n-butanolic fraction (BM-ButFr). a MPG (10, 20, 30 mg/kg); b Vit-C (100, 200, 300 mg/kg); c GP (50, 100, 150 mg/kg); d BM-ButFr (5, 10, 20 mg/kg) and e their combinations on cisplatin (7 mg/kg) induced vomiting during a 24 h observation period.
Each bar represents mean ± S.E.M (n = 7–8). * p < 0.05, ** p < 0.01 *** p < 0.001 compared to cisplatin control (ANOVA followed by Tukey's post hoc analysis) A 100% reduction in the frequency of cisplatin-induced vomiting bouts was observed with GP (100 and 150 mg/kg) and Vit-C (300 mg/kg). Adequate protection was also produced by MPG at 10 mg/kg (72–94%), 20 mg/kg (66–83%) and 30 mg/kg (35–60%); Vit-C at 100 mg/kg (28–78%) and 200 mg/kg (49–77%); GP at 50 mg/kg (63–97%) and BM-ButFr at 5 mg/kg (49–79%), 10 mg/kg (61–78%) and 20 mg/kg (67–84%) during the acute phase (0–12 h) of cisplatin-induced vomiting. Likewise, during the delayed phase (13–24 h), the antiemetic propensity was observed as 47–60% (10 mg/kg), 0–70% (20 mg/kg) and 3–49% (30 mg/kg) for MPG; 0–35% (100 mg/kg), 6–36% (200 mg/kg) and 8–35% (300 mg/kg) for Vit-C; 41–59% (50 mg/kg), 57–68% (100 mg/kg) and 73–88% (150 mg/kg) for GP and 65–74% (5 mg/kg), 57–70% (10 mg/kg) and 61–69% (20 mg/kg) for BM-ButFr. A highly effective percentage reduction of cisplatin-induced vomiting was afforded by the combinations of MPG (10 mg/kg) + Vit-C (200 mg/kg), BM-ButFr (10 mg/kg) + GP (100 mg/kg) and GP (100 mg/kg) + Vit-C (200 mg/kg) as 72–83%, 53–86% and 54–85% during the acute phase and 62–75%, 53–60% and 84–92% during the delayed phase, respectively (see Additional file 1). Extremely significant reductions in the number of retching episodes was noted for MPG [F (3,27) = 6.639, P = 0.0016] (10 mg/kg, P < 0.001), Vit-C [F (3,28) = 9.658, P = 0.0002] (300 mg/kg, P < 0.001) and GP [F (3,28) = 12.70, P < 0.0001] (100, 150 mg/kg, P < 0.001) while less significant decrements were observed with MPG at 20 mg/kg (P < 0.05), Vit-C at 100 mg/kg (P < 0.05) and 200 mg/kg (P < 0.01), GP at 50 mg/kg (P < 0.01) and BM-ButFr [F (3,28) = 4.799, P = 0.0080] at 20 mg/kg (P < 0.01). Moreover, the reductions in the % weight loss were very significant with BM-ButFr [F (3,28) = 10.12, P = 0.0001] at 10, 20 mg/kg (P < 0.001), GP [F (3,28) = 18.03, P < 0.0001] and Vit-C [F (3,28) = 20.98, P < 0.0001] at all tested doses (P < 0.001). However, the decreases were less significant for MPG [F (3,28) = 4.938, P = 0.0071] at 10 mg/kg (P < 0.01), 20 mg/kg (P < 0.05) and 30 mg/kg (P < 0.05) and with BM-ButFr at 5 mg/kg (P < 0.01). Furthermore, significant increases in vomiting latency were observed with GP [F (3,28) = 23.93, P < 0.0001] at all doses (P < 0.001) but the increases were less significant for MPG [F (3,28) = 4.170, P = 0.0146] (10 mg/kg, P < 0.05) and Vit-C [F (3,28) = 5.048, P = 0.0064] (200, 300 mg/kg, P < 0.05) (Table 1). A tendency towards an increase in vomiting latency was observed with BM-ButFr [F (3,28) = 2.449, P = 0.084] at all doses. In the positive control group, metoclopramide (30 mg/kg) significantly alleviated (P < 0.001) cisplatin-induced vomiting bouts, retching and percentage weight loss, while it significantly increased (P < 0.001) the latency to vomiting during the entire observation period. The number of vomiting bouts, retching [F (3,27) = 12.54, P < 0.0001] and % weight loss [F (3,26) = 25.72, P < 0.0001] were significantly reduced (P < 0.001) by all the combinations of selected antioxidants when compared to the cisplatin control. In addition, the vomiting latency [F (3,27) = 3.710, P = 0.0235] was significantly increased (P < 0.05) by the combination of MPG (10 mg/kg) + Vit-C (200 mg/kg) (Table 1). 
The global reduction in the number of emetic bouts for the selected antioxidants and their combinations followed the respective rank orders: GP > BM-ButFr > MPG > Vit-C and GP + Vit-C > MPG + Vit-C > BM-ButFr + GP.

In vitro antioxidant activity of N-(2-mercaptopropionyl) glycine (MPG), vitamin C (Vit-C), grape seed proanthocyanidin (GP) and B. monnieri n-butanolic fraction (BM-ButFr)

The maximum inhibition of DPPH free radicals by BHT (standard) was 93.82% at 500 μg/mL, while those of MPG, Vit-C, GP and BM-ButFr were 96.15% at 200 μg/mL, 96.71% at 500 μg/mL, 92.42% at 50 μg/mL and 90.94% at 100 μg/mL respectively, as shown in Table 2. MPG, Vit-C, GP and BM-ButFr or standard (BHT) exhibited concentration-dependent declines in spectral absorbance (Fig. 3). The EC50, antiradical power and stoichiometry of MPG, Vit-C, GP, BM-ButFr and BHT are shown in Table 2 and the antioxidant activity, established by EC50 values, decreased in the following rank order: GP > BM-ButFr > MPG > Vit-C > BHT.

Table 2 Percent of DPPH free radical scavenging activity and antioxidant strength of N-(2-mercaptopropionyl) glycine (MPG), vitamin C (Vit-C), grape seed proanthocyanidin (GP), B. monnieri n-butanolic fraction (BM-ButFr) or standard butylated hydroxytoluene (BHT) against their respective concentrations. [Columns: concentration (μg/mL) and percent inhibition (%), with antioxidant strength expressed as EC50 (μg/mL), antiradical power and stoichiometry; table body flattened in the source. Values are expressed as mean ± S.E.M from three separate experiments.]

Fig. 3 DPPH free radical scavenging assay in vitro showing absorbance of N-(2-mercaptopropionyl) glycine (MPG), vitamin C (Vit-C), grape seed proanthocyanidin (GP), B. monnieri n-butanolic fraction (BM-ButFr) or standard (butylated hydroxytoluene: BHT) against their respective concentrations. Data are presented as mean ± SD of three separate experiments

Correlations between in vitro antioxidant and in vivo antiemetic activities

The antiemetic activities in vivo were correlated with the free radical scavenging capacities of the natural and synthetic antioxidants in vitro. Figure 4 shows the high degree of Pearson correlation between the number of vomiting bouts and the % of maximum free radical scavenging capacity of the selected antioxidants. The results manifested positive correlation coefficients for MPG (r = 0.9690), GP (r = 0.9926) and BM-ButFr (r = 0.9635). However, a negative correlation coefficient between the number of bouts and antioxidant activity was observed with Vit-C (r = −0.9838). The coefficients of determination (R², a statistical measure of how well the regression lines represent the data) disclosed an association between antioxidant activity and antiemetic assay outcome. This deduction was substantiated by the values obtained for MPG (R² = 0.9390), Vit-C (R² = 0.9680), GP (R² = 0.9853) and BM-ButFr (R² = 0.9283) (Fig. 4).

Fig. 4 Linear correlation showing the involvement of in vitro antioxidant activities of N-(2-mercaptopropionyl) glycine (MPG), vitamin C (Vit-C), grape seed proanthocyanidin (GP) and B. monnieri n-butanolic fraction (BM-ButFr) in the reduction of cisplatin-induced vomiting bouts in pigeons

Discussion

In this study, different antioxidants of natural and synthetic origin were evaluated for their antiemetic activity against cisplatin-induced retching and vomiting in pigeons.
In emetogenesis studies, different animal models including monkeys [45], pigs [46], ferrets [47], dogs [48], cats [49], house musk shrews [50] and rats [51] have been utilized for evaluating antiemetic compound activity. However, these models have some limitations in terms of cost, ease of handling, absence of a vomiting center, and inability to vomit. We have chosen the pigeon emesis model due to the fact that it expresses readily quantifiable vomiting response parameters as reported in previous studies [3, 52]. The pigeon responds to a number of different emetic stimuli, including cardiac glycosides [53], reserpine [54], sigma receptor ligands [55], 5-HT3 receptor agonists [56] and chemotherapeutic drugs [57]. The pigeon model can also be used to assay the antiemetic activity of several classes of drugs for example NK1 receptor antagonists [58] and glucocorticoids [59]. Vomiting induced by cisplatin is biphasic with an acute phase lasting for 24 h and a delayed phase extending to several days [2]. In pigeons, there is no mechanistically distinct acute or delayed phase of chemotherapy-induced vomiting, although earlier studies have monitored emesis for up to 72 h [59]. In our investigation, we observed the animals for 24 h in order to comply with the ethical use of animals. The dose of cisplatin for induction of emesis varies with the animal model [45, 60, 61]. We utilized cisplatin at a dose of 7.0 mg/kg for pigeons [39] and observed a robust elevation in the number of vomiting bouts, retching and % weight loss after 24 h. Cancer chemotherapy is associated with generation of reactive oxygen species [62] and oxidative stress has been implicated in the emesis caused not only by cisplatin but other chemotherapeutic drugs as well [63]. Numerous studies have shown that the active metabolite of cisplatin i.e. cis-diaqodiammineplatinum generates free radicals that release serotonin from enterochromafin cells which then stimulate 5-HT3 receptors on vagal afferents and initiate the emetic reflex within the brain stem [64, 65]. Since the emetogenic effect of cisplatin is associated with the generation of reactive oxygen species (ROS), administration of antioxidants could detoxify ROS and thereby prevent cisplatin-induced emesis. Accordingly, antioxidants and free radical scavengers have been shown to increase the therapeutic efficacy of chemotherapy by improving tolerance and reducing their dose limiting toxicities [66]. In the present study, we have examined well established synthetic and natural antioxidants, including MPG, Vit-C, GP and BM-ButFr. Moreover, the doses chosen for these antioxidants were tolerable and benign based on their toxicity profiles [67–70]. All the selected antioxidants at the doses tested significantly decreased the number of vomiting bouts, retching and % weight loss whilst at the same time, increasing the latency to vomiting. The most intense antiemetic effect was observed with GP followed by BM-ButFr, MPG then Vit-C and our study is the first to report the antiemetic activity of GP in the pigeon vomit model. GP at doses of 100 and 150 mg/kg produced a complete reversal of emesis as seen by a 100% reduction in the frequency of cisplatin-induced vomiting bouts. Previously, GP at 10 mg/kg has been shown to produce a significant reduction of cisplatin-induced pica behavior in rats and this is exemplified by a decreased kaolin intake [30], which is regarded as being analogous to emesis [51]. 
Proanthocyanidin, at 200 mg/kg, ameliorated the cisplatin-induced decrease in the activities of antioxidant enzymes, GSH, total protein and albumin [31] while at 250 mg/kg, it alleviated cisplatin-induced hepatotoxicity in rabbits by reducing ROS generation and strengthening endogenous antioxidant systems [32]. It is noteworthy that in rodents, investigations employing a similar dose of grape seed proanthocyanidin as that used in our study, reported beneficial effects which have been attributed to an ability to support the antioxidant defense system [41, 71]. Different plants have been screened against cisplatin-induced emesis in a variety of animal models. These include Zingiber officinale Rosc. (Zingiberaceae) [48], Scutellaria baicalensis Georgi (Lamiaceae) [72] and American ginseng berry (Panax quinquefolius L.) (Araliaceae) [73]. In these studies, the antiemetic effect has been attributed to the free radical scavenging property and an antiserotonergic action of the different active constituents. In the current investigation, the n-butanolic fraction of B. monnieri (BM-ButFr) exhibited a dose dependant antiemetic activity and this accords with our previous report in which BM-ButFr significantly reduced cisplatin-induced emetogenesis [39]. The superior antiemetic effect of BM-ButFr compared to the synthetic antioxidants seen here, may be ascribed to the presence of bacoside-A components, which along with bacopaside I, constitute more than 96% w/w of the total saponins present in the extract [74]. However, the antiemetic activity of B. monnieri may also be mediated through other mechanisms in addition to its antioxidant activity [75] because B. monnieri has both anti-dopaminergic and anti-serotonergic properties [76]. In relation to this, serotonin [77] and dopamine [78] both play an important role in the induction of vomiting at the level of the area postrema. However, serotonin has been shown to differentially mediate the early emetic phase following cisplatin treatment [77]. In line with this, our previous study showed that B. monnieri not only attenuated the cisplatin-induced dopamine upsurge in the area postrema and brain stem but it also diminished the intestinal serotonin concentration in the pigeons [39]. MPG is a well-known synthetic aminothiol antioxidant that has been studied widely especially for its cardioprotective properties [78, 79]. In the present pigeon emetogenesis model, it produced a significant decline in frequencies of cisplatin-induced retching and vomiting and this is consistent with previous reports in such species as dogs [25], rats [80] and Suncus murinus [26]. It is clear that MPG has some proficiency in scavenging generated free radicals and that it exerts its beneficial effects by protecting against oxidative stress [78]. What is more, our study revealed that MPG is effective when given in lower doses since in a higher dose MPG itself causes emesis [25]. Vitamin C is a versatile water soluble antioxidant that is widely used in complementary oncology [81]. In our study, it significantly attenuated cisplatin-induced emetic episodes and this is in accordance with earlier accounts describing diminished cisplatin emesis in dogs and it also inhibited kaolin consumption in cisplatin treated rats [25, 80]. Moreover, at 300 mg/kg, Vit-C produced a 100% reduction of emesis. In relation to such findings, vitamin C has efficacy in reducing cisplatin oxidative stress by improving antioxidant levels, repairing DNA damage and inhibiting lipid peroxidation [82]. 
We have screened combinations of GP + Vit-C, MPG + Vit-C and BM-ButFr + GP against cisplatin-induced retching and vomiting. Our results showed that the GP + Vit-C combination yielded an equivalent inhibition of emetic episodes to MPG + Vit-C and BM-ButFr + GP. Additionally, the combinations tended to exert comparable, or in some instances marginally improved antiemetic effects than either agent given alone. In this respect, it has been reported that combination of vitamin E plus vitamin C affords enhanced protection against emetic episodes in dogs [25]. Similarly, a combination of vitamins C and E provides superior antiemetic activity than either of the antioxidants alone in cisplatin-induced pica behavior in rats [80]. No single antiemetic is completely effective at blocking emesis in either phase, but when administered together, the antiemetic efficacy of the combination is often greater than that of each agent given individually [77]. The inherent antioxidant potential of MPG, Vit-C, GP and BM-ButFr was evaluated by the DPPH free radical scavenging assay as this method is considered as one of the standard colorimetric methods for the evaluation of antioxidant properties of natural and pure compounds [83]. The antioxidant activity of the tested compounds was quantified in terms of EC50, antiradical power and stoichiometry. Agents with a low EC50/stoichiometry value plus high antiradical power indicated strong antioxidant activity [83, 84]. Consequently, a distinct antioxidant capability was observed for GP which yielded EC50, antiradical power and stoichiometry values of 6.498, 0.1567 and 13.00 respectively, and this was followed in rank order by BM-ButFr (55.61, 0.0159, 111.2), MPG (67.66, 0.0148, 135.3) and Vit-C (69.42, 0.0143, 138.8). These results indicated that as compared to the standard (BHT), all the selected antioxidants possessed strong free radical scavenging capacities, as seen in previous studies [43, 85]. The rank order of antioxidant activity in vitro correlated well with in vivo antiemetic activity, GP having strong antioxidant and antiemetic proclivity among the selected agents. Several studies have reported that antioxidant properties of various plants or synthetic compounds significantly contribute to their antiemetic activities [25, 86]. In the current study, the high degree of correlation between the antioxidant and antiemetic activities as evidenced by their coefficients of determination which implied that amelioration of cisplatin emetogenesis could at least be partially ascribed to the potent free radical scavenging capacities of GP, MPG, BM-ButFr and Vit-C. A substantial body of evidence suggests that oxidative stress is one of the triggering mechanisms in the mediation of vomiting induced by chemotherapeutic agents such as cisplatin [87, 88]. Cisplatin induces lipid peroxidation in the brain, liver and small intestine and releases serotonin by generating free radicals. Antioxidants scavenge the generated free radicals and protect the enterochromafin cells from oxidative injury thereby suppressing the release of serotonin in the emetogenic pathway [26]. Free radical mediated reactions are responsible for a wide range of chemotherapy-induced side effects and antioxidants are able to protect non-malignant cells and organs against some of the damaging effects of cytostatic agents [63]. 
Dietary supplementation with synthetic and herbal antioxidants ameliorates chemotherapy-induced oxidative stress and diminishes the development of their side effects as well as improving the overall response to therapy [89]. Our study therefore endorses the notion that free radical scavengers may be a beneficial class of prophylactic drugs against cancer chemotherapeutic drug induced emesis. However, further studies are warranted to investigate if there is direct evidence linking reactive oxygen species with oxidative/redox stress injury by assay of oxidized lipid or protein markers after cisplatin treatment. Ultimately, neurochemical analysis should be performed to further correlate the antiemetic effect of these antioxidants with behavioral parameters in this model of emesis. Cisplatin treatment was associated with intense vomiting as exemplified by a significant increase in the number of emetic bouts, retching, % weight loss and a simultaneous decrease in the emetic latency in the pigeon model. Pretreatment with MPG, Vit-C, GP, and BM-ButFr or their combinations significantly attenuated cisplatin-induced elevation of vomiting episodes. The selected agents all possessed potent free radical scavenging capability and consequent antioxidant potential as evaluated via the DPPH free radical scavenging assay. Although all agents exhibited efficacy, GP was conspicuous with respect to antiemetic and antioxidant potential. Evaluation of correlation coefficients disclosed close linear relationships between antioxidant and antiemetic propensity emphasizing the involvement of antioxidant activity in the reduction of cisplatin-induced retching and vomiting episodes for all the agents tested in the study. An erratum to this article is available at http://dx.doi.org/10.1186/s40360-017-0117-x. 5-HT3 : Serotonin 5-HT3 receptor BHT: BM-ButFr: Bacopa monnieri n-butanolic fraction DPPH: 2,2-diphenyl-1-picrylhydrazyl FRSA: DPPH free radical scavenging assay GSH: ROS: Vit-C: We are thankful to the Korea United Pharm. Inc Korea for donating cisplatin active material for this study. All data that support the findings of this study are available from the corresponding author upon reasonable request. FS initiated the idea and guided the research group as supervisor in planning and conducting experiments throughout the research project. JA conducted the experiments and carried out calculations and statistical analysis. He also prepared the initial draft of the manuscript. IU, MS and GA assisted in the experimental work. MS also helped in the analysis and interpretation of data as well as in preparing the final version of the manuscript. RDES revised the manuscript critically for important intellectual content. All authors read and approved the final manuscript. Experiments on laboratory animals were performed in accordance with the UK Animals (Scientific Procedures) Act 1986 and were approved by the Ethical Committee of the Department of Pharmacy, University of Peshawar (Reference No. 14/EC-12/Pharm). Additional file 1: Effect of synthetic and natural antioxidants on cisplatin-induced vomiting bouts. 
Bouts per hour graph for: N-(2-mercaptopropionyl) glycine (MPG) at 10 mg/kg (Figure S1), 20 mg/kg (Figure S2) and 30 mg/kg (Figure S3); Vitamin C (Vit-C) at 100 mg/kg (Figure S4), 200 mg/kg (Figure S5) and 300 mg/kg (Figure S6); Grape seed proanthocyanidin (GP) at 50 mg/kg (Figure S7), 100 mg/kg (Figure S8) and 150 mg/kg (Figure S9); Bacopa monnieri n-butanolic fraction (BM-ButFr) at 5 mg/kg (Figure S10), 10 mg/kg (Figure S11) and 20 mg/kg (Figure S12). (DOC 3011 kb) Department of Pharmacy, University of Peshawar, Peshawar, 25120, Khyber Pakhtunkhwa, Pakistan Department of Pharmacy, University of Swabi, Swabi, Pakistan Cardiff School of Pharmacy and Pharmaceutical Sciences, Cardiff University, Cardiff, CF103NB, UK Kris MG, Hesketh PJ, Somerfield MR, Feyer P, Clark-Snow R, Koeller JM, et al. American Society of Clinical Oncology guideline for antiemetics in oncology: update 2006. J Clin Oncol. 2006;24(18):2932–47.PubMedView ArticleGoogle Scholar Rudd J, Andrews P. Mechanisms of acute, delayed, and anticipatory emesis induced by anticancer therapies. In: Hesketh PJ (ed) Management of nausea and vomiting in cancer and cancer treatment. Sudbury: Jones and Bartlett Publishers; 2005. pp. 15–66.Google Scholar Tanihata S, Igarashi H, Suzuki M, Uchiyama T. Cisplatin-induced early and delayed emesis in the pigeon. Br J Pharmacol. 2000;130(1):132–8.PubMedPubMed CentralView ArticleGoogle Scholar Yamakuni H, Sawai-Nakayama H, Imazumi K, Maeda Y, Matsuo M, Manda T, et al. Resiniferatoxin antagonizes cisplatin-induced emesis in dogs and ferrets. Eur J Pharmacol. 2002;442(3):273–8.PubMedView ArticleGoogle Scholar Watson J, Gonsalves S, Fossa A, McLean S, Seeger T, Obach S, et al. The anti-emetic effects of CP-99,994 in the ferret and the dog: role of the NK1 receptor. Br J Pharmacol. 1995;115(1):84–94.PubMedPubMed CentralView ArticleGoogle Scholar Andrews P, Davis C. The physiology of emesis induced by anti-cancer therapy. In: Reynolds J, Andrews PLR, Davis CJ (eds) Serotonin and the scientific basis of anti-emetic therapy. Oxford: Oxford Clinical Communications; 1995. pp. 25–49.Google Scholar Okada F, Saito H, Matsuki N. Blockade of motion-and cisplatin-induced emesis by a 5-HT2 receptor agonist in Suncus murinus. Br J Pharmacol. 1995;114(5):931–4.PubMedPubMed CentralView ArticleGoogle Scholar Darmani NA, Crim JL, Janoyan JJ, Abad J, Ramirez J. A re-evaluation of the neurotransmitter basis of chemotherapy-induced immediate and delayed vomiting: evidence from the least shrew. Brain Res. 2009;1248:40–58.PubMedView ArticleGoogle Scholar Darmani NA, Wang Y, Abad J, Ray AP, Thrush GR, Ramirez J. Utilization of the least shrew as a rapid and selective screening model for the antiemetic potential and brain penetration of substance P and NK 1 receptor antagonists. Brain Res. 2008;1214:58–72.PubMedPubMed CentralView ArticleGoogle Scholar Uchiyama T, Kaneko A, Ito R. A simple method for the detection of emetic action using pigeons. J Med Soc Toho. 1978;25:912–4.Google Scholar Tanihata S, Saitou Y, Saitou K, Uchiyama T. Experimental analysis of theophylline-induced emetic response in pigeons. Jpn Pharmacol Ther. 2001;29(1):19–24.Google Scholar Saitou Y, Arakawa S-I, Saitou K-I, Tanihata S. Mechanism of amantadine-induced vomiting in the pigeon. Oyo Yakuri. 2000;59(6):111–21.Google Scholar Muggia F. Platinum compounds 30 years after the introduction of cisplatin: implications for the treatment of ovarian cancer. Gynecol Oncol. 
2009;112(1):275–81.PubMedView ArticleGoogle Scholar Lorch JH, Goloubeva O, Haddad RI, Cullen K, Sarlis N, Tishler R, et al. Induction chemotherapy with cisplatin and fluorouracil alone or in combination with docetaxel in locally advanced squamous-cell cancer of the head and neck: long-term results of the TAX 324 randomised phase 3 trial. Lancet Oncol. 2011;12(2):153–9.PubMedPubMed CentralView ArticleGoogle Scholar Köberle B, Tomicic MT, Usanova S, Kaina B. Cisplatin resistance: preclinical findings and clinical implications. Biochim Biophys Acta. 2010;1806(2):172–82.PubMedGoogle Scholar Fernandez-Ortega P, Caloto M, Chirveches E, Marquilles R, San Francisco J, Quesada A, et al. Chemotherapy-induced nausea and vomiting in clinical practice: Impact on patients' quality of life. Support Care Cancer. 2012;20(12):3141–8.PubMedView ArticleGoogle Scholar Weijl N, Wipkink-Bakker A, Lentjes E, Berger H, Cleton F, Osanto S. Cisplatin combination chemotherapy induces a fall in plasma antioxidants of cancer patients. Ann Oncol. 1998;9(12):1331–7.PubMedView ArticleGoogle Scholar Masuda H, Tanaka T, Takahama U. Cisplatin generates superoxide anion by interaction with DNA in a cell-free system. Biochem Biophys Res Commun. 1994;203(2):1175–80.PubMedView ArticleGoogle Scholar Yoshida M, Fukuda A, Hara M, Terada A, Kitanaka Y, Owada S. Melatonin prevents the increase in hydroxyl radical-spin trap adduct formation caused by the addition of cisplatin in vitro. Life Sci. 2003;72(15):1773–80.PubMedView ArticleGoogle Scholar Reuter S, Gupta SC, Chaturvedi MM, Aggarwal BB. Oxidative stress, inflammation, and cancer: how are they linked? Free Radic Biol Med. 2010;49(11):1603–16.PubMedPubMed CentralView ArticleGoogle Scholar Pace A, Savarese A, Picardo M, Maresca V, Pacetti U, Del Monte G, et al. Neuroprotective effect of vitamin E supplementation in patients treated with cisplatin chemotherapy. J Clin Oncol. 2003;21(5):927–31.PubMedView ArticleGoogle Scholar Sahin K, Tuzcu M, Gencoglu H, Dogukan A, Timurkan M, Sahin N, et al. Epigallocatechin-3-gallate activates Nrf2/HO-1 signaling pathway in cisplatin-induced nephrotoxicity in rats. Life Sci. 2010;87(7):240–5.PubMedView ArticleGoogle Scholar Kart A, Cigremis Y, Karaman M, Ozen H. Caffeic acid phenethyl ester (CAPE) ameliorates cisplatin-induced hepatotoxicity in rabbit. Exp Toxicol Pathol. 2010;62(1):45–52.PubMedView ArticleGoogle Scholar Celebi S, Gurdal MM, Ozkul MH, Yasar H, Balikci HH. The effect of intratympanic vitamin C administration on cisplatin-induced ototoxicity. Eur Arch Otorhinolaryngol. 2013;270(4):1293–7.PubMedView ArticleGoogle Scholar Gupta Y, Sharma S. Antiemetic activity of antioxidants against cisplatin-induced emesis in dogs. Environ Toxicol Pharmacol. 1996;1(3):179–84.PubMedView ArticleGoogle Scholar Torii Y, Mutoh M, Saito H, Matsuki N. Involvement of free radicals in cisplatin-induced emesis in Suncus murinus. Eur J Pharmacol. 1993;248(2):131–5.PubMedGoogle Scholar da Silva Porto PAL, Laranjinha JAN, de Freitas VAP. Antioxidant protection of low density lipoprotein by procyanidins: structure/activity relationships. Biochem Pharmacol. 2003;66(6):947–54.PubMedView ArticleGoogle Scholar Bagchi D, Bagchi M, Stohs SJ, Ray SD, Sen CK, Preuss HG. Cellular protection with proanthocyanidins derived from grape seeds. Ann N Y Acad Sci. 2002;957(1):260–70.PubMedView ArticleGoogle Scholar Devi A, Jolitha AB, Ishii N. Grape seed proanthocyanidin extract (GSPE) and antioxidant defense in the brain of adult rats. Med Sci Monit. 
2006;12(4):BR124–BR9.PubMedGoogle Scholar Wang C-Z, Fishbein A, Aung HH, Mehendale SR, Chang W-T, Xie J-T, et al. Polyphenol contents in grape-seed extracts correlate with antipica effects in cisplatin-treated rats. J Altern Complement Med. 2005;11(6):1059–65.PubMedView ArticleGoogle Scholar Yousef MI, Saad AA, El-Shennawy LK. Protective effect of grape seed proanthocyanidin extract against oxidative stress induced by cisplatin in rats. Food Chem Toxicol. 2009;47(6):1176–83.PubMedView ArticleGoogle Scholar Kandemir F, Benzer E, Ozkaraca M, Ceribasi S, Yildirim NC, Ozdemir N. Protective antioxidant effects of grape seed extract in a cisplatin-induced hepatotoxicity model in rabbits. Rev Med Vet-Toulouse. 2012;163(11):539–45.Google Scholar Devi SA, Chandrasekar BS, Manjula K, Ishii N. Grape seed proanthocyanidin lowers brain oxidative stress in adult and middle-aged rats. Exp Gerontol. 2011;46(11):958–64.View ArticleGoogle Scholar Kunnel Shinomol G, Raghunath N, Mukunda Srinivas Bharath M. Prophylaxis with Bacopa monnieri attenuates acrylamide induced neurotoxicity and oxidative damage via elevated antioxidant function. Cent Nerv Syst Agents Med Chem. 2013;13(1):3–12.View ArticleGoogle Scholar Pase MP, Kean J, Sarris J, Neale C, Scholey AB, Stough C. The cognitive-enhancing effects of Bacopa monnieri: A systematic review of randomized, controlled human clinical trials. J Altern Complement Med. 2012;18(7):647–52.PubMedView ArticleGoogle Scholar Sharma S, Gupta Y. Effect of antioxidants on cisplatin induced delay in gastric emptying in rats. Environ Toxicol Pharmacol. 1997;3(1):41–6.PubMedView ArticleGoogle Scholar Kahol AP, Singh T, Tandon S, Gupta MM, Khanuja SPS. Process for the preparation of a extract rich in bacosides from the herb Bacopa monniera. Google Patents. US patent 6833143 B1, 21 Dec 2004. http://www.google.co.in/patents/US6833143. Russo A, Borrelli F. Bacopa monniera, a reputed nootropic plant: an overview. Phytomedicine. 2005;12(4):305–17.PubMedView ArticleGoogle Scholar Ullah I, Subhan F, Rudd JA, Rauf K, Alam J, Shahid M, et al. Attenuation of cisplatin-induced emetogenesis by standardized Bacopa monnieri extracts in the pigeon: Behavioral and neurochemical correlations. Planta Med. 2014;80(17):1569–79.PubMedView ArticleGoogle Scholar Preziosi P, D'Amato M, Del Carmine R, Martire M, Pozzoli G, Navarra P. The effects of 5-HT3 receptor antagonists on cisplatin-induced emesis in the pigeon. Eur J Pharmacol. 1992;221(2):343–50.PubMedView ArticleGoogle Scholar Bagchi D, Garg A, Krohn R, Bagchi M, Bagchi D, Balmoori J, et al. Protective effects of grape seed proanthocyanidins and selected antioxidants against TPA-induced hepatic and brain lipid peroxidation and DNA fragmentation, and peritoneal macrophage activation in mice. Gen Pharmacol-Vasc S. 1998;30(5):771–6.View ArticleGoogle Scholar Ding Y, Dai X, Jiang Y, Zhang Z, Bao L, Li Y, et al. Grape seed proanthocyanidin extracts alleviate oxidative stress and ER stress in skeletal muscle of low-dose streptozotocin-and high-carbohydrate/high-fat diet-induced diabetic rats. Mol Nutr Food Res. 2013;57(2):365–9.PubMedView ArticleGoogle Scholar Shahid M, Subhan F. Protective effect of Bacopa monniera methanol extract against carbon tetrachloride induced hepatotoxicity and nephrotoxicity. Pharmacologyonline. 2014;2(2):18–28.Google Scholar Shahid M, Subhan F, Ullah I, Ali G, Alam J, Shah R. Beneficial effects of Bacopa monnieri extract on opioid induced toxicity. Heliyon. 
2016;2(2):e00068.PubMedPubMed CentralView ArticleGoogle Scholar Fukui H, Yamamoto M, Sasaki S, Sato S. Involvement of 5-HT3 receptors and vagal afferents in copper sulfate-and cisplatin-induced emesis in monkeys. Eur J Pharmacol. 1993;249(1):13–8.PubMedView ArticleGoogle Scholar Forsyth D, Yoshizawa T, Morooka N, Tuite J. Emetic and refusal activity of deoxynivalenol to swine. Appl Environ Microbiol. 1977;34(5):547–52.PubMedPubMed CentralGoogle Scholar Kamato T, Ito H, Nagakura Y, Nishida A, Yuki H, Yamano M, et al. Mechanisms of cisplatin-and m-chlorophenylbiguinide-induced emesis in ferrets. Eur J Pharmacol. 1993;238(2):369–76.PubMedView ArticleGoogle Scholar Sharma S, Kochupillai V, Gupta S, Seth S, Gupta Y. Antiemetic efficacy of ginger (Zingiber officinale) against cisplatin-induced emesis in dogs. J Ethnopharmacol. 1997;57(2):93–6.PubMedView ArticleGoogle Scholar Smith WL, Callaham EM, Alphin RS. The emetic activity of centrally administered cisplatin in cats and its antagonism by zacopride. J Pharm Pharmacol. 1988;40(2):142–3.PubMedView ArticleGoogle Scholar Kwiatkowska M, Parker LA, Burton P, Mechoulam R. A comparative analysis of the potential of cannabinoids and ondansetron to suppress cisplatin-induced emesis in the Suncus murinus (house musk shrew). Psychopharmacology (Berl). 2004;174(2):254–9.View ArticleGoogle Scholar Takeda N, Hasegawa S, Morita M, Matsunaga T. Pica in rats is analogous to emesis: an animal model in emesis research. Pharmacol Biochem Be. 1993;45(4):817–21.View ArticleGoogle Scholar Navarra P, Martire M, del Carmine R, Pozzoli G, Preziosi P. A dual effect of some 5-HT3 receptor antagonists on cisplatin-induced emesis in the pigeon. Toxicol Lett. 1992;64:745–9.PubMedView ArticleGoogle Scholar Hanzlik P, Wood D. The mechanism of digitalis-emesis in pigeons. J Pharmacol Exp Ther. 1929;37(1):67–100.Google Scholar Gupta G, Dhawan B. Blockade of reserpine emesis in pigeons. Arch Int Pharmacodyn Ther. 1960;128:481–90.PubMedGoogle Scholar Hudzik TJ. Sigma ligand-induced emesis in the pigeon. Pharmacol Biochem Be. 1992;41(1):215–7.View ArticleGoogle Scholar Wolff MC, Leander JD. Comparison of the antiemetic effects of a 5-HT 1A agonist, LY228729, and 5-HT 3 antagonists in the pigeon. Pharmacol Biochem Be. 1995;52(3):571–5.View ArticleGoogle Scholar Wolff MC, Leander JD. Effects of a 5-HT 1A receptor agonist on acute and delayed cyclophosphamide-induced vomiting. Eur J Pharmacol. 1997;340(2):217–20.PubMedView ArticleGoogle Scholar Tanihata S, Oda S, Kakuta S, Uchiyama T. Antiemetic effect of a tachykinin NK1 receptor antagonist GR205171 on cisplatin-induced early and delayed emesis in the pigeon. Eur J Pharmacol. 2003;461(2–3):197–206.PubMedView ArticleGoogle Scholar Tanihata S, Oda S, Nakai S, Uchiyama T. Antiemetic effect of dexamethasone on cisplatin-induced early and delayed emesis in the pigeon. Eur J Pharmacol. 2004;484(2):311–21.PubMedView ArticleGoogle Scholar Gylys J, Doran K, Buyniski J. Antagonism of cisplatin induced emesis in the dog. Res Commun Chem Pathol Pharmacol. 1979;23(1):61–8.PubMedGoogle Scholar Nakayama H, Yamakuni H, Higaki M, Ishikawa H, Imazumi K, Matsuo M, et al. Antiemetic activity of FK1052, a 5-HT3-and 5-HT4-receptor antagonist, in Suncus murinus and ferrets. J Pharmacol Sci. 2005;98(4):396–403.PubMedView ArticleGoogle Scholar Sangeetha P, Das U, Koratkar R, Suryaprabha P. Increase in free radical generation and lipid peroxidation following chemotherapy in patients with cancer. Free Radic Biol Med. 
1990;8(1):15–9.PubMedView ArticleGoogle Scholar Weijl N, Cleton F, Osanto S. Free radicals and antioxidants in chemotherapy induced toxicity. Cancer Treat Rev. 1997;23(4):209–40.PubMedView ArticleGoogle Scholar Aapro M, Jordan K, Feyer P. Pathophysiology and classification of chemotherapy-induced nausea and vomiting. Prevention of nausea and vomiting in cancer patients. London: Springer Healthcare; 2013. p. 5–14.Google Scholar Andrews PL, Horn CC. Signals for nausea and emesis: Implications for models of upper gastrointestinal diseases. Auton Neurosci. 2006;125(1):100–15.PubMedPubMed CentralView ArticleGoogle Scholar Block KI, Koch AC, Mead MN, Tothy PK, Newman RA, Gyllenhaal C. Impact of antioxidant supplementation on chemotherapeutic efficacy: a systematic review of the evidence from randomized controlled trials. Cancer Treat Rev. 2007;33(5):407–18.PubMedView ArticleGoogle Scholar Ray S, Bagchi D, Lim PM, Bagchi M, Gross SM, Kothari SC, et al. Acute and long-term safety evaluation of a novel IH636 grape seed proanthocyanidin extract. Res Commun Mol Pathol Pharmacol. 2000;109(3–4):165–97.Google Scholar Abbas M, Subhan F, Mohani N, Rauf K, Ali G, Khan M. The involvement of opioidergic mechanisms in the activity of Bacopa monnieri extract and its toxicological studies. Afr J Pharm Pharmacol. 2011;5(8):1120–4.Google Scholar Chiba T. Effect of sulfur-containing compounds on experimental diabetes. VI.: Screening of hypoglycemic action of sulfur-containing compounds. Yakugaku Zasshi. 1969;89(8):1138–43.PubMedGoogle Scholar Ash M, Ash I. Handbook of Preservatives. NY: Synapse Info Resources; 2004.Google Scholar Bagchi M, Milnes M, Williams C, Balmoori J, Ye X, Stohs S, et al. Acute and chronic stress-induced oxidative gastrointestinal injury in rats, and the protective ability of a novel grape seed proanthocyanidin extract. Nutr Res. 1999;19(8):1189–99.View ArticleGoogle Scholar Aung HH, Dey L, Mehendale S, Xie J-T, Wu JA, Yuan C-S. Scutellaria baicalensis extract decreases cisplatin-induced pica in rats. Cancer Chemother Pharmacol. 2003;52(6):453–8.PubMedView ArticleGoogle Scholar Mehendale S, Aung H, Wang A, Yin J-J, Wang C-Z, Xie J-T, et al. American ginseng berry extract and ginsenoside Re attenuate cisplatin-induced kaolin intake in rats. Cancer Chemother Pharmacol. 2005;56(1):63–9.PubMedView ArticleGoogle Scholar Deepak M, Amit A. 'Bacoside B' - the need remains for establishing identity. Fitoterapia. 2013;87(2013):7–10.PubMedView ArticleGoogle Scholar Bhattacharya S, Bhattacharya A, Kumar A, Ghosal S. Antioxidant activity of Bacopa monniera in rat frontal cortex, striatum and hippocampus. Phytother Res. 2000;14(3):174–9.PubMedView ArticleGoogle Scholar Rauf K, Subhan F, Sewell RDE. A bacoside containing Bacopa monnieri extract reduces both morphine hyperactivity plus the elevated striatal dopamine and serotonin turnover. Phytother Res. 2011;26:758–63.PubMedView ArticleGoogle Scholar Hesketh P, Van Belle S, Aapro M, Tattersall F, Naylor R, Hargreaves R, et al. Differential involvement of neurotransmitters through the time course of cisplatin-induced emesis as revealed by therapy with specific receptor antagonists. Eur J Cancer. 2003;39(8):1074–80.PubMedView ArticleGoogle Scholar M-o D, Morita T, Yamashita N, Nishida K, Yamaguchi O, Higuchi Y, et al. The antioxidant N-2-mercaptopropionyl glycine attenuates left ventricular hypertrophy in in vivo murine pressure-overload model. J Am Coll Cardiol. 2002;39(5):907–12.View ArticleGoogle Scholar Tanonaka K, Iwai T, Motegi K, Takeo S. 
Effects of N-(2-mercaptopropionyl)-glycine on mitochondrial function in ischemic–reperfused heart. Cardiovasc Res. 2003;57(2):416–25.PubMedView ArticleGoogle Scholar Sharma S, Gupta S, Kochupillai V, Seth S, Gupta Y. Cisplatin-induced pica behaviour in rats is prevented by antioxidants with antiemetic activity. Environ Toxicol Pharmacol. 1997;3(2):145–9.PubMedView ArticleGoogle Scholar Grober U. Antioxidants and other micronutrients in complementary oncology. Breast Care. 2009;4(1):13.PubMedPubMed CentralView ArticleGoogle Scholar Suhail N, Bilal N, Khan H, Hasan S, Sharma S, Khan F, et al. Effect of vitamins C and E on antioxidant status of breast‐cancer patients undergoing chemotherapy. J Clin Pharm Ther. 2012;37(1):22–6.PubMedView ArticleGoogle Scholar Mishra K, Ojha H, Chaudhury NK. Estimation of antiradical properties of antioxidants using DPPH assay: A critical review and results. Food Chem. 2012;130(4):1036–43.View ArticleGoogle Scholar Loo A, Jain K, Darah I. Antioxidant and radical scavenging activities of the pyroligneous acid from a mangrove plant, Rhizophora apiculata. Food Chem. 2007;104(1):300–7.View ArticleGoogle Scholar Chun-yang L. Measuring the antiradical efficiency of proanthocyanidin from grape seed by the DPPH · assay. J Food Sci Biotech. 2006;2:102–6.Google Scholar Yang Y, Kinoshita K, Koyama K, Takahashi K, Tai T, Nunoura Y, et al. Novel experimental model using free radical-induced emesis for surveying anti-emetic compounds from natural sources. Planta Med. 1999;65(06):574–6.PubMedView ArticleGoogle Scholar Johnston KD, Lu Z, Rudd JA. Looking beyond 5-HT3 receptors: A review of the wider role of serotonin in the pharmacology of nausea and vomiting. Eur J Pharmacol. 2014;722:13–25.PubMedView ArticleGoogle Scholar Miner WD, Sanger GJ. Inhibition of cisplatin-induced vomiting by selective 5-Hydroxytryptamine M-receptor antagonism. Br J Pharmacol. 1986;88(3):497–9.PubMedPubMed CentralView ArticleGoogle Scholar Conklin KA. Dietary antioxidants during cancer chemotherapy: Impact on chemotherapeutic effectiveness and development of side effects. Nutr Cancer. 2000;37(1):1–18.PubMedView ArticleGoogle Scholar Basic pharmacology
On the Cauchy problem for a derivative nonlinear Schrödinger equation with nonvanishing boundary conditions

Phan Van Tin, Institut de Mathématiques de Toulouse, UMR5219, Université de Toulouse, CNRS, UPS IMT, F-31062 Toulouse Cedex 9, France
Received January 2021  Revised April 2021  Early access May 2021

In this paper we consider the Schrödinger equation with nonlinear derivative term. Our goal is to initiate the study of this equation with non vanishing boundary conditions. We obtain the local well posedness for the Cauchy problem on Zhidkov spaces $ X^k( \mathbb{R}) $ and in $ \phi+H^k( \mathbb{R}) $. Moreover, we prove the existence of conservation laws by using localizing functions. Finally, we give explicit formulas for stationary solutions on Zhidkov spaces.

Keywords: Nonlinear derivative Schrödinger equations, Cauchy problem, nonvanishing boundary condition.
Mathematics Subject Classification: 35Q55, 35A01.
Citation: Phan Van Tin. On the Cauchy problem for a derivative nonlinear Schrödinger equation with nonvanishing boundary conditions. Evolution Equations & Control Theory, doi: 10.3934/eect.2021028
Congruence modulo n is an equivalence relation on the integers. It partitions the integers into n equivalence classes. The equivalence classes are called the residues modulo n. Any member of a residue is a representative. Arithmetic can be defined on residues.

Addition, Additive Inverse, and Multiplication

Sometimes we can perform operations on representatives of the residues using customary integer operations and the result will be a representative of the residue we would have got if we performed the corresponding operations on the residue classes. This is true for addition and multiplication, and we can use this to show that addition and multiplication on residues are both associative and commutative. They also observe the distributive law. The additive identity is the residue class with zero in it. The additive inverse of a residue exists and can be found by computing the additive inverse of a representative. In summary, the residues modulo n are a commutative ring with identity, and addition, negation, and multiplication can be calculated by doing the corresponding operations on representatives.

Exponentiation

Exponentiation can be calculated using multiplication, so the preceding section shows us that we can compute exponentiation on a residue by using a representative. Note that the base is a residue, but the exponent is an integer. There is a shortcut for computing exponents due to Euler when a and n are relatively prime: $$ \begin{align} a^{\phi(n)} \equiv 1 \;(\text{mod}\;n) \end{align} $$ The congruence allows us to compute any exponent with fewer than φ(n) multiplications. The function φ is called Euler's totient, and it counts the positive integers less than or equal to n which are relatively prime to n. An expression for the totient is $$ \begin{align} \phi(n) = n \prod_{p | n} \frac{p - 1}{p} \end{align} $$ where the product is over all primes that divide n. When n is itself prime, Euler's Theorem reduces to Fermat's Little Theorem: $$ \begin{align} a^{p-1} \equiv 1 \;(\text{mod}\; p) \end{align} $$

Multiplicative Inverse and Division

Division on the integers is not always defined. An integer a is divisible by b if there is another integer m such that: $$ \begin{align} a = mb \end{align} $$ Multiplicative inverses for most integers don't exist. The exceptions are 1 and -1, which are their own inverses. For residues, the situation is better. A nonzero residue a modulo n has a multiplicative inverse if and only if a and n are relatively prime. The extended Euclidean algorithm will find x and y such that ax + ny = 1, and thus x is the multiplicative inverse of a. If the modulus n is prime, then all nonzero residues have multiplicative inverses and the residues are a field. If n is not prime, the residues are not an integral domain and cancellation of a nonzero factor is not always possible. If d is the greatest common divisor of a and n, then for all z and z' we have this result: $$ \begin{align} az \equiv az' \;(\text{mod}\; n) \iff z \equiv z' \;\left(\text{mod}\; \frac{n}{d}\right) \end{align} $$ If a and n are relatively prime, which means d is 1, we can cancel a from both sides of the equation.

Non-zero complex numbers have two square roots. Positive real numbers have two square roots and negative real numbers have none. Positive integers have two square roots if they are perfect squares, otherwise they have none. Non-zero residues have either two square roots or none. In the former case, the residue is said to be a quadratic residue and in the latter case a quadratic nonresidue.
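To make these operations concrete, here is a minimal Python sketch (my own addition, not part of the original notes; the function names are mine) of the extended Euclidean algorithm, the modular inverse it yields, and square-and-multiply exponentiation of a residue by an integer exponent.

```python
def extended_gcd(a, n):
    """Return (g, x, y) with a*x + n*y = g = gcd(a, n)."""
    old_r, r = a, n
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y


def mod_inverse(a, n):
    """Multiplicative inverse of the residue a modulo n; exists iff gcd(a, n) = 1."""
    g, x, _ = extended_gcd(a % n, n)
    if g != 1:
        raise ValueError(f"{a} has no inverse modulo {n}")
    return x % n


def mod_pow(base, exponent, n):
    """Square-and-multiply exponentiation; the base is a residue, the exponent an integer."""
    result = 1
    base %= n
    while exponent > 0:
        if exponent & 1:            # use this bit of the exponent
            result = (result * base) % n
        base = (base * base) % n    # square for the next bit
        exponent >>= 1
    return result


# 3 * 5 = 15 ≡ 1 (mod 7), so 5 is the inverse of 3 modulo 7.
assert mod_inverse(3, 7) == 5
# Fermat's Little Theorem: 2^(11-1) ≡ 1 (mod 11).
assert mod_pow(2, 10, 11) == 1
```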
Determining whether a residue is a quadratic residue is complicated. The following notation, called the Legendre symbol, is used when p is an odd prime: $$ \begin{align} \left( \frac{a}{p} \right) = \begin{cases} \;\; 1 \;\;\; a \; \text{is a quadratic residue} \\ \;\; 0 \;\;\; p \mid a \\ -1 \;\;\; a \; \text{is a quadratic nonresidue} \end{cases} \end{align} $$ The following always hold: $$ \begin{align} \left( \frac{1}{p} \right) = 1 \\ \left( \frac{-1}{p} \right) = (-1)^{\frac{p-1}{2}} \\ \left( \frac{2}{p} \right) = (-1)^{\frac{p^2 - 1}{8}} \\ \left( \frac{a}{p} \right) \left( \frac{b}{p} \right) = \left( \frac{ab}{p} \right) \end{align} $$ When a and p are relatively prime, then $$ \begin{align} \left(\frac{a^2}{p} \right) = 1 \end{align} $$ The Jacobi symbol is a generalization of the Legendre symbol where p is replaced by any positive odd integer n. The Kronecker symbol is a further generalization where n can be any non-zero integer.

The quadratic reciprocity law states that for odd primes q and p which are both of the form 4k + 3, exactly one of the following congruences has a solution: $$ \begin{align} x^2 \equiv q \;(\text{mod}\;p) \end{align} $$ $$ \begin{align} x^2 \equiv p \;(\text{mod}\;q) \end{align} $$ Moreover, if q and p are odd primes not both of the form 4k + 3, then both congruences are solvable or both congruences are not. How the quadratic reciprocity law is used: $$ \begin{align} \left(\frac{3}{5}\right) = \left(\frac{5}{3}\right) = \left(\frac{2}{3}\right) = (-1)^{\frac{3^2 - 1}{8}} = -1 \end{align} $$ $$ \begin{align} \left(\frac{3}{7}\right) = -\left(\frac{7}{3}\right) = -\left(\frac{1}{3}\right) = -1 \end{align} $$

non-prime residue
modulus 2
non-prime modulus
how to find the square roots

Discrete Logarithm

The discrete log of g base b, where both g and b are residues, is an integer k such that b^k = g. A brute force search has run time which is linear in the size of the multiplicative group, or exponential in the number of digits in the size of the multiplicative group. Better algorithms exist, but none are polynomial in the number of digits in the size of the multiplicative group.

Chinese Remainder Theorem

If we have multiple equations with the same modulus, we can use substitution to find a solution. If the moduli on the equations are different, the Chinese Remainder Theorem tells us there is a solution under certain conditions. In particular there is a solution a to the following system of equations, provided that the n_i are all pairwise relatively prime: $$ \begin{align} a \equiv a_i \mod n_i \;\;\; (i = 1,\ldots,k) \end{align} $$ Moreover, if a and a' are two solutions, then: $$ \begin{align} a \equiv a' \mod \prod_{i=1}^k n_i \end{align} $$
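As a further sketch (again my own, assuming Python 3.8+ so that pow(x, -1, n) computes a modular inverse and math.prod is available), the Chinese Remainder Theorem can be made constructive for pairwise relatively prime moduli:

```python
from math import gcd, prod


def crt(residues, moduli):
    """Solve a ≡ residues[i] (mod moduli[i]) for pairwise relatively prime moduli.
    Returns the unique solution modulo the product of the moduli."""
    assert all(gcd(m1, m2) == 1
               for i, m1 in enumerate(moduli)
               for m2 in moduli[i + 1:]), "moduli must be pairwise relatively prime"
    N = prod(moduli)
    a = 0
    for a_i, n_i in zip(residues, moduli):
        N_i = N // n_i                        # product of the other moduli
        a += a_i * N_i * pow(N_i, -1, n_i)    # pow(x, -1, n) is the modular inverse
    return a % N


# a ≡ 2 (mod 3), a ≡ 3 (mod 5), a ≡ 2 (mod 7)  ->  a = 23 (mod 105)
assert crt([2, 3, 2], [3, 5, 7]) == 23
```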
Volume of an Oblate Spheroid

In this lesson, we'll discuss how by using the concept of a definite integral one can calculate the volume of something called an oblate spheroid. An oblate spheroid is essentially just a sphere which is compressed or stretched along one of its dimensions while leaving its other two dimensions unchanged. For example, the Earth is technically not a sphere—it is an oblate spheroid. To find the volume of an oblate spheroid, we'll start out by finding the volume of a paraboloid. (If you cut an oblate spheroid in half, the two left over pieces would be paraboloids.) To do this, we'll draw an \(n\) number of cylindrical shells inside of the paraboloid; by taking the Riemann sum of the volume of each cylindrical shell, we can obtain an estimate of the volume enclosed inside of the paraboloid. If we then take the limit of this sum as the number of cylindrical shells approaches infinity and their volumes approach zero, we'll obtain a definite integral which gives the exact volume inside of the paraboloid. After computing this definite integral, we'll multiply the result by two to get the volume of the oblate spheroid.

Figure 1: Graph of the ellipse \(\frac{x^2}{9}+\frac{y^2}{4}=1\) centered at the origin of the \(xy\)-plane. Also, I have drawn the \(i^{th}\) rectangle underneath the quarter-ellipse within the first quadrant. There are an \(n\) number of such rectangles underneath this quarter-ellipse along the interval \(Δx=3-0\).

Finding volume of an oblate spheroid

In Figure 1, I have graphed the ellipse \(\frac{x^2}{9}+\frac{y^2}{4}=1\) on the \(xy\)-plane. If we rotate the ellipse about its minor axis (the \(y\)-axis), it will trace out the closed surface illustrated in Figure 3. The volume of revolution which that surface encloses is called an oblate spheroid. In this lesson, we'll use the concept of a definite integral to calculate the volume of an oblate spheroid. To calculate this volume, we'll first approximate the volume by summing the volumes of an \(n\) number of cylindrical shells (see Figure 2) drawn within the oblate spheroid. After that, we'll take the limit of this sum as \(n→∞\).

Figure 2: A cylindrical shell is obtained by revolving the rectangle \(f(x_i)Δx\) about the \(y\)-axis. Doing this for all \(n\) rectangles, we get an \(n\) number of shells.

By summing the volumes of these \(n\) number of cylindrical shells, we can obtain an estimate for the total volume enclosed inside of the paraboloid obtained by rotating the quarter-ellipse (the one in the upper-right quadrant) about the \(y\)-axis. But before we do that, let's discuss how to construct a cylindrical shell and how to calculate its volume. Let's subdivide the interval on the \(x\)-axis, \(Δx=3-0\), into an \(n\) number of equally spaced tick marks; let's label each tick mark with \(x_i\) where \(i=1,...,n\). In Figure 1, I have drawn a rectangle with width \(Δx=x_{i+1}-x_i\) and height \(f(x_i)\). If we rotate this rectangle about the \(y\)-axis, the rectangle will trace out the cylindrical shell illustrated in Figure 2. To calculate the volume of the cylindrical shell, we must take the product of the area of the cylindrical shell's base with its height. The ring \(QQ'RR'\) with width \(Δx=x_{i+1}-x_i\) in Figure 2 is the cylindrical shell's base.
Let's subtract the area of the inner circle \(QQ'\) from the area of the outer circle \(RR'\) in Figure 2 to get the area of the cylindrical shell's base: $$A=π(x_{i+1})^2-π(x_i)^2.\tag{1}$$ Using basic algebra, we can rewrite Equation (1) as $$A=π\frac{x_i+x_{i+1}}{2}\biggl[2(x_{i+1}-x_i)\biggr].\tag{2}$$ The term \((x_i+x_{i+1})/2\) in Equation (2) is the average value of \(x_i\) and \(x_{i+1}\). In Figure 1, I have labeled the average of these two values as \(\bar{x}_i\) on the \(x\)-axis. Substituting \(\bar{x}_i\) into Equation (2), we have $$A=2π\bar{x}_iΔx.\tag{3}$$ (You might be asking yourself why we went through the trouble of rewriting Equation (1) in the form expressed in Equation (3). The reason why we did this will become evident when we wish to express the limit of the sum of the volumes of each cylindrical shell as a definite integral. But we'll discuss this in more detail shortly.) As you can see from Figure 2, the height of a cylindrical shell is \(f(x_i)\). The volume of the \(i^{th}\) cylindrical shell is therefore given by $$ΔV_i=2π\bar{x}_if(x_i)Δx.\tag{4}$$ To estimate the volume of the paraboloid, let's sum the volumes of all the cylindrical shells to get $$S_n=\sum_{i=1}^n2π\bar{x}_if(x_i)Δx.\tag{5}$$ When defining a definite integral, we always start with a sum of the form $$S_m=\sum_{i=1}^mg(x_i)Δx;\tag{6}$$ then, we take the limit of such a sum as \(m→∞\) to get $$\int_a^bg(x)dx=\lim_{m→∞}\sum_{i=1}^mg(x_i)Δx.$$

Figure 3: If \(a\) and \(c\) represent the semi-major and semi-minor axes of an ellipse, respectively, and if \(a=3\) and \(c=2\), then by rotating such an ellipse about an axis we can obtain an oblate spheroid.

The problem with Equation (5) is that the term \(2π\bar{x}_if(x_i)Δx\) isn't the same as the \(g(x_i)\) in Equation (6). We cannot define a function \(h(\bar{x}_i)\) or \(h(x_i)\) that we can set equal to \(2π\bar{x}_if(x_i)Δx\). The term \(2π\bar{x}_if(x_i)Δx\) requires two input values (namely, \(\bar{x}_i\) and \(x_i\)) to specify its value, whereas functions like \(g(x_i)\) in Equation (6) require only one input value (namely, \(x_i\)) to specify its value. Fortunately, there is a way around this problem. Recall that it does not matter whether we take a left-hand Riemann sum (in which case, the height of the rectangle would be \(g(x_i)\)), a right-hand Riemann sum (this is when the height of each rectangle is given by \(g(x_{i+1})\)), or a midpoint Riemann sum (when the height of a rectangle is given by \(g(\frac{x_i+x_{i+1}}{2})=g(\bar{x}_i)\)). (We shall not discuss the reasons why this is here; but if you do not understand why this is, I strongly encourage you to review the topic.) For similar reasons, we could replace the \(f(x_i)\) in Equation (5) with either \(f(x_{i+1})\) or \(f(\bar{x}_i)\); doing so will not change the limit of the sum. (Indeed, we could replace \(f(x_i)\) in Equation (5) with \(f(x_i^*)\), where \(x_i≤x_i^*≤x_{i+1}\); although Equation (5) would then give a different approximation of the paraboloid, its limit would remain the same. To understand why this is, it would be a good idea to review the concept of limits.) Swapping the \(f(x_i)\) in Equation (5) with \(f(\bar{x}_i)\), we get a different sum (which we'll specify by \(S_n'\)) given by $$S_n'=\sum_{i=1}^n2π\bar{x}_if(\bar{x}_i)Δx.\tag{7}$$ What's nice about Equation (7) is that the term \(2π\bar{x}_if(\bar{x}_i)Δx\) is expressed entirely in terms of the single variable \(\bar{x}_i\).
Thus, Equation (7) is of the same form as Equation (6). If \(n→∞\) (which is to say, if the number of cylindrical shells within the paraboloid approaches infinity), then the sum \(S'_n\) will get closer and closer to equaling the exact volume of the paraboloid. Thus $$\lim_{n→∞}\sum_{i=1}^n2π\bar{x}_if(\bar{x}_i)Δx=\int_0^3 2πxf(x)dx.\tag{8}$$ To evaluate the integral in Equation (8), we need to find out what the function \(f(x)\) is. \(f(x)\) represents the height (which is to say, the \(y\)-value) associated with each rectangle on the interval \(Δx=3-0\). In other words, \(f(x)\) is the \(y\)-coordinate associated with each point along the quarter-ellipse in the first quadrant of the \(xy\)-plane illustrated in Figure 2. Recall that the equation \(\frac{x^2}{9}+\frac{y^2}{4}=1\) was used to graph each \((x,y)\) coordinate along the ellipse in Figure 1. If we restrict the domain of this function to values of \(x\) and \(y\) where \(0≤x≤3\) and \(0≤y≤2\), then the equation \(\frac{x^2}{9}+\frac{y^2}{4}=1\) could be used to graph the quarter-ellipse in the first quadrant of the \(xy\)-plane in Figure 1. Thus, for the aforementioned restrictions on the domain, the \(y\) in the equation, \(\frac{x^2}{9}+\frac{y^2}{4}=1\), specifies the \(y\)-coordinate of each point along the quarter-ellipse. It therefore also specifies the height of each rectangle under the quarter-ellipse. This means that \(f(x)=y(x)\). Using the equation \(\frac{x^2}{9}+\frac{y^2}{4}=1\), we can solve for \(f(x)=y(x)\): $$\frac{x^2}{9}+\frac{(f(x))^2}{4}=1$$ $$\frac{(f(x))^2}{4}=1-\frac{x^2}{9}$$ $$f(x)=\sqrt{4-\frac{4}{9}x^2}.\tag{9}$$ Substituting Equation (9) into the integral in Equation (8), we have $$\text{Volume of paraboloid}=\int_0^3 2πx\sqrt{4-\frac{4}{9}x^2}dx.\tag{10}$$ At this point, all of the hard work is done and we just need to solve the definite integral in Equation (10) and then multiply our answer by \(2\) to get the volume of the oblate spheroid illustrated in Figure 3. We can solve the integral in Equation (10) by using \(u\)-substitution. If we let \(u=4-\frac{4}{9}x^2\), then $$\frac{du}{dx}=\frac{-8}{9}x$$ $$du=\frac{-8}{9}xdx$$ $$dx=\frac{-9}{8}\frac{1}{x}du.\tag{11}$$ Substituting \(u\) and Equation (11) into (10), we have $$\text{Volume of paraboloid}=\int_{?_1}^{?_2}(2πx)\biggl(\frac{-9}{8}\frac{1}{x}\biggr)u^{1/2}du$$ $$\text{Volume of paraboloid}=\frac{-9}{4}π\int_{?_1}^{?_2}u^{1/2}du.$$ When \(x=0\), \(u=4\) and when \(x=3\), \(u=4-\frac{4}{9}(3)^2=4-4=0\). Substituting the limits of integration into the integral above and solving the integral, we have $$\text{Volume of paraboloid}=\frac{-9}{4}π\biggl[\frac{2}{3}u^{3/2}\biggr]_4^0=\frac{-3}{2}π\biggl[\left(4-\frac{4}{9}x^2\right)^{3/2}\biggr]_0^3$$ $$=\frac{-3}{2}π\biggl[(4-4)^{3/2}-(4-0)^{3/2}\biggr]=\frac{-3}{2}π(0-8)=12π.$$ Thus we have shown that the volume of the paraboloid is \(12π\) cubic units. Multiplying this result by \(2\), we find that the volume of this oblate spheroid is given by $$\text{Volume of oblate spheroid}=24π.\tag{12}$$

Source: https://www.gregschool.org/gregschoollessons/2017/10/22/volume-of-an-oblate-spheroid-rl852-8yer8
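As a quick numerical sanity check (my own addition, not part of the original lesson), the shell sum in Equation (7) can be evaluated directly for a large \(n\) and compared against the closed-form answers \(12π\) and \(24π\):

```python
import math

def paraboloid_volume_by_shells(n=200_000):
    """Midpoint Riemann sum of 2*pi*x*f(x) over [0, 3], i.e. Equation (7)."""
    dx = 3.0 / n
    total = 0.0
    for i in range(n):
        x_bar = (i + 0.5) * dx                            # midpoint of the i-th subinterval
        f = math.sqrt(4.0 - (4.0 / 9.0) * x_bar ** 2)     # Equation (9)
        total += 2.0 * math.pi * x_bar * f * dx           # volume of the i-th shell
    return total

half = paraboloid_volume_by_shells()
print(half, 12 * math.pi)       # both are ~37.699
print(2 * half, 24 * math.pi)   # volume of the full oblate spheroid, ~75.398
```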
Top PDF On certain finite linear groups of prime degree On certain finite linear groups of prime degree CHAPTER I INTRODUCTION In studying finite linear groups of fixed degree over the complex field, it is convenient to restrict attention to irreducible, unimodular, and quasiprimitive grou[r] The structure of infinite dimensional linear groups satisfying certain finiteness conditions The study of the subgroups of 𝐺𝐿(𝑉, 𝐹 ) in the case when 𝑉 is infinite dimensional over 𝐹 has been much more limited and normally requires some additional restrictions. The circumstances here are similar to those present in the early development of Infinite Group Theory. One approach there consisted in the application of finiteness conditions to the study of infinite groups. One such restriction that has enjoyed considerable atten- tion in linear groups is the notion of a finitary linear group. In the late 1980's, R.E. Phillips, J.I. Hall and others studied infinite dimensional lin- ear groups under finiteness conditions, namely finitary linear groups (see [34, 14, 32, 35, 15, 16])). Here 𝐺 is called finitary if, for each element 𝑔 ∈ 𝐺, the subspace 𝐶 𝑉 (𝑔) has finite codimension in 𝑉 ; the reader is Convergence and limits of linear representations of finite groups Unitary representations. Our primary motivation and model example is the view of infinite dimensional unitary representations into tracial von Neumann algebras as limits of finite dimensional unitary representations. By a finite dimensional unitary representation (of degree r), we mean a homomorphism κ : F r → U (n) of the free group on r generators into the unitary group U(n). Note that such representations can be given by the r-tuple {κ(γ i } r i=1 , where {γ i } r i=1 are the standard generators of the The Multiplicative Degree of Some Finite Groups In mathematics, there is a branch related with the study of uncertainty named probability. This probability can be applied to another field of mathematics such as group theory. The results obtained are very interesting since the calculation dealt with regular properties of elements of a certain set. On Galois groups of prime degree polynomials with complex roots proaches the infimum, the difference 𝑝 − 2𝑘 gets smaller, as required. Returning to the group-theoretic problem stated above (for degree 𝑛, not necessarily a prime), Jordan [10] showed that 𝐵(𝑛) = √ 𝑛 − 1 + 1 is a lower bound for the minimal degree. A substantial improvement of this bound is due to Bochert [3] who showed that 𝐵(𝑛) = 𝑛/8, and if 𝑛 > 216 then one has an even better bound, namely 𝐵(𝑛) = 𝑛/4. Proofs for the Jordan and Bochert estimates can be found also in Dixon & Mortimor [7], Theorem 3.3D and Theorem 5.4A, respectively. More recently, Liebeck and Saxl [11], using the classification of finite simple groups, have proved 𝐵(𝑛) = 𝑛/3. On metacyclic subgroups of finite groups Over years there has been considerable literature studying global properties of groups which are determined by the structure or embedding of their Sylow p-subgroups, where p is a prime which is going to be fixed. Most of these results go back to Burnside's p-nilpotency criterion stating that a group is p-nilpotent, i.e. it has a normal Hall p ′ -subgroup provided that a Sylow p-subgroup is in the centre of its normaliser. As a consequence, a group with cyclic Sylow p-subgroups is p-nilpotent if its order is coprime to p − 1. This result does not remain true for metacyclic Sylow p-subgroups as the alternating group of degree 5 shows. 
However, if the order of a group G is coprime to p 2 − 1 and its The precise value of commutativity degree in some finite groups Indeed for 𝑘𝑘 ≠ 0 , this formula also shows an upper bound for 𝐺𝐺 and does not determine the exact number of 𝑘𝑘(𝐺𝐺). Also, several results have been verified about conjugacy classes of subgroups of metacyclic 𝑝𝑝 -groups see [8,9,10]. For example, in [10, Theorem 1.3] it was shown that if 𝐺𝐺 is any finite split metacyclic 𝑝𝑝 -group for an odd prime 𝑝𝑝 , that is, 𝐺𝐺 = 𝐻𝐻 ⋉ 𝐾𝐾 for subgroups 𝐻𝐻 and 𝐾𝐾 , and if |𝐻𝐻| = 𝑝𝑝 𝛼𝛼 and |𝐾𝐾| = 𝑝𝑝 𝛼𝛼 +𝛽𝛽 , then there exist exactly On finite groups having a certain number of cyclic subgroups We summarize our notations. cl(a) denotes the conjugacy class of a in G, π(G) denotes the set of prime numbers dividing the order of G, ϕ(n) denotes the Euler function that counts the positive integers less than n that are relatively prime to n, F(G) denotes the subgroup generated by all normal nilpotent subgroups of G, O p (G) denotes the unique maximal normal p-subgroup of G, F p,q denotes A New Encoding Framework for Predicate Encryption with Non-Linear Structures in Prime Order Groups et al. [16, 1] also explicitly defines linearity in common variables of keys and ciphertexts as a new property for their security analysis. Also, the techniques suggested in [11, 1] assume that linearity in common variables and use them for their proofs, implicitly using the structural definition of pair encodings. As described in the overview of pair encoding, the pair encoding was not defined only by properties, but also required to have a certain structure which is linear in common variables. Our Solution. Our solution largely adopts the notion of the pair encoding framework. However, the pair encoding framework cannot properly describe non-linear common variables. Therefore, we improves the syntax of pair encoding. The most significant change in our framework is that we decompose variables used as exponents of public keys and master secret keys into two types hidden common variables and shared common variables to express non-linearity in PE schemes as follows: On nonsolvable groups whose prime degree graphs have four vertices and one triangle information see the survey paper [2].) In general, it seems that the prime degree graphs contain many edges and thus they should have many triangles, so one of the cases that would be interesting is to consider those finite groups whose prime degree graphs have a small number of triangles. In [4], the author studied finite groups whose prime degree graphs have no triangle. In particular, he proved that if ∆(G) has no triangle, then | ρ(G) | ≤ 5. He also obtained a complete classification of all finite groups whose prime degree graphs contain no triangle with five vertices. In [5], the author studied finite groups whose prime degree graphs have at most two triangles. In particular, in [5, Theorem A], he considered the case where ∆(G) has one triangle and proved that ∆(G) has at most six vertices and if ∆(G) has six vertices, then G ≃ P SL(2, 2 f ) × A, where A is abelian, | π(2 f − δ) | = 2, and | π(2 f + δ) | = 3 for some δ = ± 1 with f ≥ 10. Furthermore, if ∆(G) has five vertices, he described all possible cases for such a graph. Finite irreducible linear 2-groups of degree 4 Case 2: t ^ a. (By symmetry, of course, we may also assume that 5 a.) In this case, Q = {Br\G')r\ {B D where r- = (13) or r = (23); also, (5.25) is not necessarily satisfied for all G. We claim that Q may be assumed to be cyclic. 
To see this, suppose that Q is noncyclic; that is, ( 5 n G) D ( S n Hf^ > Ft. Then S n G*" contains one of F{i,j, 0,0,0,1) or F{i,j, 1,1,1,0) as maximal. Thus, as we saw above, B Cl G^ certainly occurs as the diagonal subgroup of a group in the Ust of Theorem 5.4.1. Since by hypothesis G is in that list, we have by Remark 5.4.2 that {B n Gy = B n G. In particular, B Ci G has a maximal V^4-submodule with centraJiser (a) in B. The faithful finite V4-submodules of B with such a maximal submodule were listed in Case 1; of these, we see from the orbit list given in the proof of Proposition 5.2.8 that only the F{i, 1,1,1, 0, 0) and F(0, 0, 0, 0,1,1,1) are normalised by r. For r = (13) and r = (23) in turn, we follow the procedure established in Case 1 (here, w may stiU be chosen as u^) for each G satisfying (5.25)- (5.27) and with one of the aforementioned V4-modules as diagonal subgroup. This wiU yield all linear isomorphisms of type I between such G and other groups in the Ust of Theorem 5.4.1. Details of these calculations are omitted: no new isomorphisms are obtained, and so from now on we assume that Q is cyclic. Certain finite abelian groups with the Redei $k$-property Proof. Let n be the number of the not necessarily distinct prime divisors of |G|. Let us denote the prime |A| by p. The factorization G = AB implies that |G| = |A||B|. In the special case n = 1 it follows that |G| = |A| = p and |B | = 1. Therefore G = A and B = {e}. Plainly hB i 6= G and so for the special case n = 1 the theorem is proved. For the remaining part of the proof we assume that n ≥ 2 and we start an induction on n. On the efficiency of finite groups The efficiency of direct products of groups, stimulated by questions asked by Wiegold in [30], has been studied by several authors; see for example [1], [4], [7], [16]. In this chapter we give general methods for proving that direct products of two or three groups possessing certain properties are efficient and also give some specific examples. The most general of these examples involve the family of simple groups P5L(2, p), for prime p > 5. 5L(2, p) is the group of two by two matrices having entries in Zp of determinant one. This group has only one invo­ lution, the central element, and factoring by the centre yields PSL{2, p). Both of these groups are perfect. 5T(2, p) has trivial Schur multiplier and PSL{2, p) has multiplier Cg, its covering group being 5L(2, p). Finite Groups with Certain Permutability Criteria Throughout this note, G denotes a finite group. The relationship between the proper- ties of the Sylow subgroups of a group G and its structure has been investigated by many authors. Starting from Gasch˝ utz and It˝ o ([10], Satz 5.7, p.436) who proved that a group G is solvable if all its minimal subgroups are normal. In 1970, Buckely [4] proved that a group of odd order is supersolvable if all its minimal subgroups are normal (a subgroup of prime order is called a minimal subgroup). Recall that a subgroup is said to be S-permutable in G if it permutes with all Sylow subgroup of G. This concept, as a generalization of normality, was introduced by Kegel [11] in 1962 and has been studied extensively in many notes. For example, Srinivasan [15] in 1980 obtained the supersolvability of G under the assumption that the maximal subgroups of all Sylow subgroups are S-permutable in G. In 2000, Ballester-Bolinches et al. 
[3] introduced the c-supplementation concept of a finite group: A subgroup H of a group G is said to be c-supplemented in G if there exists a subgroup K of G such that G = HK and H ∩ K ≤ H G , where H G = Core G (H) is the Prime power lie algebras and finite p groups Having said this however, it should be noted that the nilpotent n-dimensional Fp—Lie algebra which we associate to each isomorphism class of groups of order pn (p > n) whose derived subgroup has exponent dividing p (obtained by ignoring the T —action) is not, in general, the graded Lie algebra which arises from the filtration of such a group by the lower p-central series. This follows from the fact that for groups o f exponent p, the Fp—Lie algebra we associate is isomorphic to the Lie algebra given by the Campbell-Hausdorff formula and this Lie algebra is uniquely determined by the isomorphism type of the group. This is not the case for the graded construction however, since one always has non-isomorphic groups of order pn (p,n > 5) whose Fp —Lie algebras arising from the lower p-central series are isomorphic. To see that this is the case, consider the table of groups of order p5 given in [12] and in particular the groups (in the notation of the paper) ( 15) and </>io(l5)- These are both of exponent p, maximal class 4 and non-isoclinic (hence non-isomorphic), but from the presentations given there one can verify that the 5-dimensional graded Fp —Lie algebras arising from their lower central series are both isomorphic to the split extension of the 4-dimensional Abelian Fp —Lie algebra by a nilpotent linear map o f maximum nilpotency class 4 (for p > 5). By taking direct products of these two groups with an elementary Abelian p-group o f the appropriate order one sees that a similar situation holds for any n > 5. Groups of linear automata The paper is organized as follows. Firstly we recall main definitions concerning linear automata over modules. In this account we follow [2]. Then we introduce a special class of linear automata, so-called scalar automata. In such automata the module of inner states is equal to the module of letters and transition and output functions are the sums of multiplications by elements of the layer ring. We classify in Theorem 1 the groups of scalar automata. The proof is based on the technique presented in [3, Proposition 4.1] and developed in [4, Theorem 4.1] and [5, Proposition 1], where, in fact, groups of some scalar automata were calculated. As a corollary, we describe in Theorem 2 groups of linear automata over a finite field whose space of states is equal to this field. These results may be regarded as a contribution to the theory of self-similar groups ([6]). 2. Let R be a commutative ring with unit, R ∗ Finite groups as groups of automata with no cycles with exit Theorem 3. Let G be a group generated by (finite) automaton (with no cycles with exit) A = hX, Q, ϕ, λi over an alphabet X, P < S(X) and for every state q ∈ Q the permutation λ(q, ·) belongs to the group P. Then the group P ≀ G is generated by (finite) automaton (with no cycles with exit) over an alphabet X. On some invariants of finite groups Groups G with ω|G| = 1 are known as p-groups and are extensively studied, with some specific methods (see [2] and subsequent volumes of this monograph). They have many interesting properties. For example, they are nilpotent groups and every nilpotent group is a direct product of p-groups with coprime orders. 
Presentations of linear groups
In this thesis, we investigate the deficiency of the groups PSL(2,p^). J.A. Todd gave presentations for PSL(2,p^) which use large numbers of generators and relations ("A second note on the linear fractional group." J. London Math. Soc. 2 (1936) 103-107). Starting with these, we obtain, at best, deficiency -1 presentations for PSL(2,2^) (≅ SL(2,2^)) and deficiency -6 presentations for PSL(2,p^), p an odd prime. If p^ ≡ -1 (mod 4), the latter can
On varieties of metabelian groups of prime-power exponent
varieties of metabelian groups). The basic (but in its full generality entirely hopeless) problem in this theory is to describe all metabelian varieties and the lattice lat(AA) they form, and indeed most of the results obtained so far concern aspects of this problem.
White Gaussian Noise Spectrum and Power White Gaussian noise has constant power spectral density $N_0/2$. I know that the area under the power spectral density curve between two points gives the power of the signal between these two points. If I want to know the power of a certain frequency in the signal (not in a range of frequencies), can we say that the power of each frequency in the signal is exactly $N_0/2$? The total of power of additive white Gaussian noise is infinity, what does this mean? Is it reasonable to assume that the noise added to the signal have an infinite power? noise white NohaNoha If I want to know the power of a certain frequency in the signal (not in a range of frequencies), can we say that the power of each frequency in the signal is exactly No/2? No, BUT: You mean the right thing, you just say it wrongly: The Power Spectral Density is constant – a single frequency doesn't have any power; it has a "power per bandwidth"! To arrive at a power, you need to integrate the density over a non-zero mass of frequencies. (This is kind of an important distinction to make – only infinitely long periodic signals, e.g. sine waves, have power at a single frequency; everything else has a "power distributed over frequencies".) Yes. Notice that you're never dealing with a truly white Gaussian noise in continuous-time systems (luckily for the universe, I might add); it's always approximately white for some bandwidth. Everything else is physically impossible – but rarely matters. Example: the thermal noise you can measure over a resistor is the classical example of white Gaussian noise in systems. However, it's not really white – the power density decreases at very high frequencies. But that's totally irrelevant to your observation – your measurement doesn't go into the terahertzes. In time-discrete systems, things look different: for a sampled time-continuous stochastic signal (noise) to be white, it's sufficient that the original time-continuous signal had a constant PSD over a bandwidth. So, there's no physical problem in the time-continuous world. Since a discrete signal is just a sequence of numbers, there's no concern for "physicality" anyways. Marcus MüllerMarcus Müller $\begingroup$ The power spectral density of AWGN is constant at No/2 or No? What I read is that it is constant at No/2, while the definition of No is the amount of power per unit bandwidth watt/Hz. Why No is divided by two? If we don't divide by two, then the value of the constant PSD is No (Watt/Hz), which when multiplied by the total bandwidth will give the total power. $\endgroup$ – Noha Dec 8 '20 at 12:05 $\begingroup$ I need to understand the following: why we can't define power for a single frequency component, unless there is a sinusoid at that frequency? infinitely long periodic signals, e.g. sine waves, have power at a single frequency, but any signal consists of a range of frequencies in the form of sinusoidal signals. Each sinusoid has a certain duration in the signal. Infinitely long sinusoidal signals have power, and also finite sinusoidal signals have power. $\endgroup$ – Noha Dec 8 '20 at 12:49 $\begingroup$ $N_0$ vs $N_0/2$: complex or real signals; depends on your definition of bandwidth. So, watch out for how the texts define bandwidth. $\endgroup$ – Marcus Müller Dec 8 '20 at 13:30 $\begingroup$ We can define power at a single frequency. It's 0 for white noise. It's because an integral over a single point of a bounded function is always 0, see my comments under Mark's answer. 
$\endgroup$ – Marcus Müller Dec 8 '20 at 13:30 $\begingroup$ I understand that mathematically, but I can not imagine that. A sinusoid always has power even if finite in duration, and of course every frequency is represented as a sinusoid with certain duration in the signal. $\endgroup$ – Noha Dec 8 '20 at 14:08

White noise is a conceptual signal more than real world signal. In the context of estimation it is the signal which can't be estimated based on its past. In the context of Frequency domain it is the one with constant value (On average) for any of its bins. Now, for continuous signals, it implies it has infinite energy, hence it is only a mathematical concept. As no such thing in real life. Royi

Yes. Indeed. Remark The way I interpret your question is: "What's the value of the PSD (Which you refer as power) frequency". The answer to that is that many people dealing with White Noise try to understand if they can intuitively think it is built by infinite sum of Harmonic Signals. Which actually the definition of White Noise: It requires all basis functions in order to build it. Each with the same power (On average). In case you'd see such signal it will indeed have infinite power. Yet you can only encounter Band Limited White Noise which is white within the frequencies it was sampled. See How to Simulate AWGN (Additive White Gaussian Noise) in Communication Systems for Specific Bandwidth.

$\begingroup$ 1. Nope, indeed not: at any frequency $f_0$ of white noise, the power is 0, since $$\int\limits_{f_0}^{f_0} G(f)\,\mathrm df=0$$ for all bounded functions $G(f)$, so especially for a constant value PSD $G(f)=N_0/2$. $\endgroup$ – Marcus Müller Dec 5 '20 at 10:22 $\begingroup$ I'll allow myself to disagree there: they specifically say "(not in a range of frequencies)"! $\endgroup$ – Marcus Müller Dec 5 '20 at 11:29 $\begingroup$ @Mark: "... can we say that the power of each frequency in the signal is exactly $N_0/2$?" The only answer to this is really "no", because that question is based on a misunderstanding on what power spectral density means. You can't define power for a single frequency component, unless there is a sinusoid at that frequency, which corresponds to a Dirac delta impulse in the power spectrum. But that is not the case for white noise. $\endgroup$ – Matt L. Dec 5 '20 at 11:35 $\begingroup$ @Mark really, no! The value of the PSD is not a power, and the difference between it being a power density or a power is really important, especially considering how the question was phrased with, referring to power on a single exact frequency. Any single exact frequency of white noise has exactly 0 power! $\endgroup$ – Marcus Müller Dec 5 '20 at 21:10 $\begingroup$ @Mark: The value of the PSD is $N_0/2$ at each frequency, correct. However, this does absolutely not mean that the power at each frequency equals $N_0/2$. No matter how many times you repeat that, it remains completely wrong. $\endgroup$ – Matt L. Dec 6 '20 at 12:41
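To make the density-versus-power distinction concrete, here is a small numerical sketch (my own, not from any of the answers above; the sample rate and PSD level are arbitrary choices). It generates sampled, hence band-limited, white Gaussian noise and estimates its PSD with Welch's method: the estimate is flat at the designed density, while the finite total power equals that density times the bandwidth.

```python
import numpy as np
from scipy import signal

fs = 10_000.0       # sample rate [Hz]; the one-sided band is fs/2 = 5 kHz wide
psd_level = 1e-3    # desired one-sided PSD [W/Hz] -- an assumed value
sigma2 = psd_level * fs / 2.0                       # total power = density * bandwidth
rng = np.random.default_rng(0)
x = rng.normal(0.0, np.sqrt(sigma2), size=2**18)    # band-limited white Gaussian noise

f, Pxx = signal.welch(x, fs=fs, nperseg=4096)       # one-sided PSD estimate [W/Hz]

print(np.mean(Pxx))                 # ~1e-3: flat at the designed density
print(np.var(x))                    # ~5.0:  density * (fs/2), a finite total power
print(np.sum(Pxx) * (f[1] - f[0]))  # ~5.0:  integrating the PSD recovers that power
```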
Bitumen and asphaltene derived nanoporous carbon and nickel oxide/carbon composites for supercapacitor electrodes

Dinesh Mishra1, Rufan Zhou1, Md. Mehadi Hassan1, Jinguang Hu1, Ian Gates1, Nader Mahinpey1 & Qingye Lu1
Scientific Reports volume 12, Article number: 4095 (2022)

Asphaltenes from bitumen are abundant resource to be transformed into carbon as promising supercapacitor electrodes, while there is a lack of understanding the impact from different fractions of bitumen and asphaltenes, as well as the presence of transition metals. Here, nanoporous carbon was synthesized from bitumen, hexane-insoluble asphaltenes and N,N-dimethylformamide (DMF)-fractionated asphaltenes by using Mg(OH)2 nanoplates as the template with in-situ KOH activation, and used as a supercapacitor electrode material. All of the carbon exhibited large surface area (1500–2200 m2 g−1) with a distribution of micro and mesopores except for that derived from the DMF-soluble asphaltenes. The pyrolysis of asphaltenes resulted in the formation of nickel oxide/carbon composite (NiO/C), which demonstrated high capacitance of 380 F g−1 at 1 A g−1 discharge current resulting from the pseudocapacitance of NiO and the electrochemical double layer capacitance of the carbon. The NiO/C composite obtained from the DMF-insoluble portion had low NiO content which led to lower capacitance. Meanwhile, the specific capacitance of NiO/C composite from the DMF-soluble part was lower than the unfractionated asphaltene due to the higher NiO content resulting in lower conductivity. Therefore asphaltenes derived from nickel-rich crude bitumen is suitable for the synthesis of nanoporous NiO/C composite material with high capacitance.

The demand for renewable energy has grown rapidly in recent years due to the rapid decline of fossil fuels and growing concerns about environmental pollution.
Meanwhile the demand for sustainable and clean energy is becoming more critical owing to the emergence of various electronic devices1,2,3. Therefore, the search for next generation energy storage materials and devices is very important. Supercapacitors have received a great deal of attention from the research community as energy storage devices due to their low cost, high power density, and high efficiency4,5,6,7. A supercapacitor consists of two electrodes immersed in an electrolyte and separated by an ion conducting but electron insulating membrane. The mechanism of charge storage in supercapacitors can be non-faradaic (electrochemical capacitor) or faradaic (pseudocapacitor)1. Various carbon materials with high surface area, high conductivity, morphology, size, and pore size distribution can be synthesized at large scale. In general, pure carbon materials such as activated carbon, graphene nanosheets, nanotubes, and nanocages exhibit non-faradaic double-layer energy storage mechanism, i.e. there is no electron transfer at the electrode electrolyte interface and energy storage is electrostatic in nature8. Meanwhile, fast reversible redox reactions occur in faradaic pseudocapacitors during the charge–discharge process9. Among pseudocapacitors, transition metal oxides or transition metal hydrides are mostly used due to their high theoretical specific capacitance and fast redox reactions on their surfaces10,11. Noble metal oxides such as RuO2 and IrO2 have been studied as electrode materials in the past12,13. However, the use of noble metal oxides for supercapacitors is limited due to their high cost. Instead, the use of more abundant and cheaper transition metal oxides has been explored, which has made it feasible to design supercapacitor materials with high theoretical capacitance. For example, porous nanostructured NiO and its composites have been studied as electrodes for supercapacitor because of their low cost and high theoretical capacity14,15,16. However, NiO has poor electroconductivity and therefore low charge–discharge rate and reversibility. Taking the benefit of the higher conductivity of carbon materials and high theoretical capacitance of NiO, alternatives have been explored by combining NiO with activated carbon or carbon black16,17,18. Bitumen is an abundant natural resource in Canada which has been widely used as raw material for petroleum products. Canada alone produced 2.8 million barrels per day of crude bitumen in 201719. Unlike conventional crude oil, bitumen is rich in other elements such as nitrogen, sulphur, and heavy metals. Additionally, asphaltenes, the insoluble component obtained from partial upgrading of bitumen, are also cheap and abundant carbon-rich resource. The molecular complexity of bitumen can be reduced by fractionating it using different solvents using ASTM standards20. Based on this method, bitumen can be fractionated into saturates, aromatics, resins, and asphaltenes. Asphaltenes are a solubility class that is soluble in light aromatics such as benzene and toluene but is insoluble in light paraffins such as the n-pentane or heptane21. Recently, there has been a surge in the synthesis of novel carbon materials such as nanosheet, nanoporous carbon, etc. from fossil fuels including pitch, coal, and asphaltenes22,23,24,25. Bitumen and asphaltenes are rich in polycyclic aromatic hydrocarbons which can be transformed into highly ordered carbon nanostructures including nanotubes and nanosheets. 
By using a melamine sponge template and asphaltene extracted from crude oil as the precursor26, or asphaltene from coal and an in-situ sheet-structure-directing agent from urea thermal polymerization27, the interconnected porous carbon were derived with an electrochemical capacitance of 200 F g−1 at 5 mV s−126, or the porous carbon nanosheet with a graphitized-like ribbon structure with 282.9 F g−1 at 100 A g−127 in a three-electrode test, respectively. Although bitumen and asphaltenes are very promising raw materials for carbon supercapacitors, there is no detailed study relating the physical and electrochemical properties of nanocarbon obtained from different fractions of bitumen. Furthermore, due to the presence of transition metals in bitumen, it can be directly used to synthesize transition metal oxide–carbon composites (TMO/C) which are known to exhibit superior performance as supercapacitors due to high conductivity of carbonaceous material and high pseudo capacitance of TMOs28. Various kinds of two-dimensional (2D) materials have been used to assist in the formation of planar carbon nanosheets. Some examples of such materials which provide a guiding surface for the formation of carbon nanostructures are montmorillonite clay, Zn(OH)2 nanosheets, Mg(OH)2 nanoplates, MoS2 nanosheets, amino functionalized graphene oxide, NaCl, Na2SiO3, vermiculite, etc25,29,30,31,32,33,34. We herein report a Mg(OH)2 nanoplate template guided synthesis of porous carbon nanomaterials using bitumen and asphaltene fractionated from the same bitumen and their in-situ KOH activation. Mg(OH)2 nanoplates were chosen as template due to its cost effectiveness, simple preparation, and overall good performance of the carbon nanostructures prepared on Mg(OH)2 substrate. Asphaltenes were fractionated from the bitumen by precipitation using hexane. In addition, the asphaltene obtained was further partitioned into two fractions using N,N-dimethylformamide (DMF) as the solvent. The nanoporous carbon formed presents high surface area and a distribution of micro and mesopores which results in high conductivity, specific capacitance, and retention. The asphaltene fraction obtained from nickel complexes-containing bitumen led to the formation of NiO nanoparticles upon pyrolysis. The NiO/C composite obtained from asphaltenes exhibited the highest capacitance. The specific capacitance of the NiO/C composites obtained from DMF fractionated asphaltene was also measured. Interestingly, the capacitance decreased when the asphaltenes were fractionated using DMF. There was a significant decrease in the capacitance of NiO/C composite obtained from DMF insoluble fraction of asphaltenes, which was ascribed to the lower NiO content after DMF treatment. The NiO/C composite obtained from DMF soluble fraction of asphaltenes was higher than the insoluble fraction but lower than unfractionated asphaltenes. The proposed rationale for lower capacitance than unfractionated asphaltene is the higher Ni content in DMF soluble fraction of asphaltene resulted in NiO/C composites with lower conductivity. Toluene (anhydrous, 99.8%), n-hexane (99%), N,N-dimethylformamide (ACS reagent, ≥ 99.8%) and hydrochloric acid (ACS reagent, 37%) were purchased from Sigma Aldrich and used as received. Magnesium chloride hexahydrate (crystalline), sodium hydroxide (pellets, 98%), and potassium hydroxide (pellets, 85%) were purchased from Fisher Scientific. Oil sand sample was provided from an Alberta oil sand company. 
All solutions were prepared in deionized water (resistivity ≥ 18.2 MΩ cm).

Synthesis of nanoporous carbon

Bitumen was extracted from the Alberta oil sand sample using toluene as the solvent. Asphaltenes were obtained by precipitating the bitumen-in-toluene solution with hexane. Asphaltenes were further separated into DMF-insoluble and DMF-soluble fractions by dissolution in DMF followed by filtration. Mg(OH)2 was prepared by slow reaction of MgCl2 and NaOH solutions as described in the literature35. The precipitated Mg(OH)2 was filtered, washed with DI water and dried. For the preparation of porous nanocarbon, 2 g of bitumen or asphaltenes was mixed with 4 g of Mg(OH)2 and 8 g of KOH. The mixture was transferred to a high-temperature crucible and placed inside a tube furnace under N2 atmosphere (flow rate 300 mL min−1). The sample was heated to 300 °C at a rate of 5 °C min−1 in N2 atmosphere and kept there for 30 min. Finally, the temperature was raised to 800 °C at a rate of 5 °C min−1 and held for another 1 h. After the completion of the reaction, the sample was cooled to room temperature and washed with HCl followed by DI water. The nanoporous carbon samples obtained from bitumen were labeled as BCNS and the nanoporous carbon obtained from hexane-precipitated asphaltenes was labeled as ACNS1. The nanoporous carbons from the DMF-insoluble and DMF-soluble fractions of asphaltenes were labeled as ACNS2 and ACNS3, respectively.

Material characterization

Nanoporous carbons were characterized by scanning electron microscopy (SEM), Fourier-transform infrared spectroscopy (FTIR), surface area analysis, pore size distribution, and X-ray powder diffraction (XRD) analysis. The characteristic peaks and bands were acquired by FTIR with an ATR sampling accessory (Perkin Elmer 400 FT-IR). 32 scans were performed from 500 to 4000 cm−1 to acquire the FTIR spectra. The XRD spectra were obtained using a Rigaku multiplex X-ray diffractometer with a Cu X-ray source operated at 40 kV voltage and 40 mA current. A Micromeritics ASAP 20 surface analyzer was used to measure the surface area of the samples by the Brunauer–Emmett–Teller (BET) method (N2 gas adsorption–desorption). Using the same equipment, pore size distributions were calculated by the Barrett–Joyner–Halenda (BJH) formalism using desorption isotherms. All the samples were degassed for 6 h at 300 °C prior to measurements. SEM images were acquired using a Quanta FEG 250 field emission scanning electron microscope. All measurements were carried out under high vacuum at either 2.5 or 5 kV.

Electrochemical measurement

The working electrode was fabricated by mixing 90% nanoporous carbon or NiO/carbon composite and 10% PTFE in 2-propanol. The mixture was sonicated for 30 min to form a homogeneous mixture which was then loaded on a 1 × 1 cm2 nickel foam current collector. About 1 mg of carbon was loaded on each nickel foam. After evaporating the solvent, the nickel foam with carbon was further dried at 95 °C in an oven for 1 h. For the three-electrode test, 6 M KOH was used as the electrolyte, a Pt plate as the counter electrode, and a Ag/AgCl electrode as the reference electrode. All the electrochemical tests were performed at 25 °C. The working electrode was tested by cyclic voltammetry (CV) and galvanostatic charge–discharge using a PARSTAT 4000A electrochemical workstation. Electrochemical impedance spectra were acquired between 100 kHz and 0.01 Hz using the same instrument.
From the CV curve, the specific capacitance \({C}_{sp}\) of the carbon electrode under the three-electrode system was calculated by Eq. (1): $${C}_{sp}=\frac{\int Idv}{mv\Delta V}$$ where \(\int Idv\) is the integrated area of the CV curve, \(m\) is the mass of the electrode material, \(v\) is the potential scanning rate (V s−1), and \(\Delta V\) is the potential window of the CV. From charge–discharge experiments, the specific capacitance \({C}_{sp}\) of the carbon electrode under the three-electrode system was determined by Eq. (2): $${C}_{sp}=\frac{I\Delta t}{m\Delta V}$$ where \(I\) is the applied current, \(\Delta t\) is the discharge time, \(m\) is the mass of the electrode material and \(\Delta V\) is the potential window.

As shown in Fig. 1A, the bitumen samples were fractionated into hexane-soluble maltenes and hexane-insoluble asphaltenes, and the asphaltenes were subsequently partitioned into DMF-soluble and DMF-insoluble fractions. The nanoporous carbon and NiO/C composites, i.e., BCNS, ACNS1, ACNS2 and ACNS3, obtained from bitumen, asphaltene, and the DMF-insoluble and DMF-soluble asphaltene fractions, respectively, were characterized using SEM, FTIR, XRD, BET surface area analysis, and pore size distribution analysis. The SEM images of the four samples are shown in Fig. 1B–E, respectively. As shown in Fig. 1B, the BCNS sample had a flaky appearance due to sheet-like structures, whereas the ACNS1 and ACNS2 samples (Fig. 1C,D) looked spongy with a large number of pores on the surface. Meanwhile, the ACNS3 sample looked very compact with large pore sizes (Fig. 1E). This also accounts for the rather small surface area (discussed below) of the ACNS3 sample compared to the others. Such significant differences in surface morphology indicate the contributions of the different oil fractions to forming the carbon network. Asphaltenes have high aromaticity and readily polymerize or cross-link, which favours networked or graphitic carbon structures. The hexane-soluble portion probably weakens interactions among asphaltenes and helps the formation of flake-like carbon structures from the bitumen precursor. The interactions of the different portions in asphaltenes also lead to different degrees of crosslinking and different pore sizes during pyrolysis.

The FTIR spectra of the carbon and NiO/C composites are shown in Fig. 2A. Carbon nanomaterials are good absorbers of radiation and the FTIR spectra of these materials can be noisy. Weak peaks corresponding to C–O bond stretching were observed in the BCNS sample but were not found in the ACNS samples. This means the maltene portion in the bitumen precursor led to trace C–O in the final carbon structure and thus different surface properties for BCNS compared with the ACNS samples. C–H bands were absent in both BCNS and ACNS samples. The almost featureless FTIR spectra indicate the absence of functional groups on the particle surface. This implies that active functional groups such as nitrogen and sulfur were eliminated from the carbon structure during KOH etching activation26. XRD spectra of the BCNS and ACNS samples are shown in Fig. 2B. A broad peak around 2θ = 23° and a weak peak around 2θ = 46° confirmed the presence of graphitic material in all the samples. However, in the ACNS samples, five distinct peaks corresponding to NiO were identified in addition to the broad carbon peaks, which suggests the formation of NiO/C composites in the ACNS1, ACNS2, and ACNS3 samples.
This also indicates the strong binding of Ni to the asphaltenes precipitated by hexane, so that Ni was not removed during the further processing of the asphaltenes. The presence of NiO in the carbon was later found to be crucial for charge storage in supercapacitor electrode materials. The peak corresponding to the NiO(200) plane overlapped with the C(100) plane. The Scherrer equation was used to determine the crystallite size of NiO from the XRD data. The calculated crystallite sizes of NiO for the ACNS1, ACNS2 and ACNS3 samples were 25.7 ± 2.1 nm, 25.3 ± 1.3 nm, and 30.8 ± 2.2 nm, respectively. Fractionation with the polar solvent DMF leads to asphaltene portions with different Ni contents in the precursors. The increase in the size of the NiO particles in the DMF-extracted ACNS3 sample could be due to the higher Ni content in the DMF-soluble fraction.

(A) Flow diagram representing the extraction of asphaltene from bitumen using hexane and subsequent partitioning of asphaltenes into soluble and insoluble components using N,N-dimethylformamide (DMF). SEM images of (B) BCNS, (C) ACNS1, (D) ACNS2, and (E) ACNS3 carbon samples obtained from bitumen, asphaltenes, and the DMF-insoluble and DMF-soluble asphaltene fractions, respectively. The scale bar is 1 μm. (A) FTIR spectra and (B) XRD spectra of BCNS, ACNS1, ACNS2, and ACNS3.

Figure 3A,B show the N2 adsorption–desorption isotherms and the pore size distributions of the nanoporous carbon samples. The surface area and pore volumes of the nanoporous carbon materials are listed in Table 1. The adsorption–desorption isotherms indicate that there is a distribution of micro- and mesopores. A hysteresis loop was seen in all samples, indicating multilayer adsorption and capillary condensation in the mesoporous structure24. As shown in Table 1, the BET surface area was 2117 m2 g−1 for BCNS, 1594 m2 g−1 for ACNS1, 1589.5 m2 g−1 for ACNS2, and 222 m2 g−1 for ACNS3. Despite the larger surface area of BCNS, the pore size distribution showed abundant mesopores in BCNS, whereas the pore size distribution was much narrower and in the micropore region for the ACNS1 and ACNS2 samples. In general, abundant micropores are associated with higher specific capacitance of carbon materials36. The lower surface area (222 m2 g−1) of the ACNS3 sample is comparable to the surface area of the Ni/C composite prepared by carbonization of Ni-phthalocyanine complexes reported in the literature37. Polar solvents such as DMF have been used in the past to extract organometallic complexes (e.g. vanadium or Ni-porphyrin complexes) from asphaltenes38,39. Thus, the DMF extract is considerably rich in metallic content, which results in a Ni/C composite with higher Ni and lower carbon concentration upon carbonization. The lower adsorption and the absence of abundant pores in ACNS3 suggest that the dominant charge storage mechanism in this sample will be pseudocapacitance rather than the electrochemical double layer capacitance (EDLC) mechanism found in carbon-rich materials.

(A) N2 adsorption–desorption isotherms and (B) pore size distribution of BCNS, ACNS1, ACNS2, and ACNS3. Table 1 Physical properties of BCNS, ACNS1, ACNS2, and ACNS3.

Figure 4A shows the CV curves of ACNS1 at scan rates of 10, 25, 50, 100, and 200 mV s−1 over the potential range 0 to − 1 V. The large current response and quasi-rectangular shape suggest reversible electrochemical double layer capacitance, whereas the slight redox peaks at − 0.2 V reveal the pseudocapacitive behavior of the ACNS1 sample, which is due to NiO.
Galvanostatic charge–discharge curves of ACNS1 at current densities of 1, 2.5, 5, 10, and 20 A g−1 are shown in Fig. 4B. The galvanostatic charge–discharge curves deviate from symmetry, which implies that the supercapacitive behavior of ACNS1 resulted from both pseudocapacitance and EDLC. Charging–discharging times were longest at 1 A g−1 and decreased as the current was increased. The CV and galvanostatic charge–discharge curves of BCNS, ACNS2, and ACNS3 are shown in Fig. 5. The CV curves of these electrodes were similar in shape to that of the ACNS1 electrode but had lower current values. The galvanostatic charge–discharge curves of the BCNS electrode were nearly symmetric, indicating a dominant EDLC mechanism. On the other hand, the charge–discharge curves of ACNS2 and ACNS3 were similar in shape to those of the ACNS1 electrode due to the pseudocapacitance along with EDLC. For comparison, the CV and galvanostatic charge–discharge curves of reduced graphene oxide (rGO) were also measured (Fig. 5). The gravimetric capacitances obtained from the CV and galvanostatic charge–discharge curves of BCNS, ACNS1, ACNS2, ACNS3, and reduced GO are shown in Fig. 4C,D, respectively. The data show that ACNS1 had the highest capacitance overall. The gravimetric capacitance of ACNS1 is 359 F g−1 at a potential scan rate of 10 mV s−1. ACNS3 had a specific capacitance of 365 F g−1 at the same scan rate but is lower than ACNS1 at higher scan rates. Similarly, the gravimetric capacitances measured from galvanostatic charge–discharge measurements were highest for ACNS1, as shown in Fig. 4D. The calculated gravimetric capacitance from the GCD measurement at 1 A g−1 was 380 F g−1. The capacitances of BCNS, ACNS2, and reduced GO were lower than those of ACNS1 and ACNS3. The higher specific capacitance \({C}_{sp}\) for ACNS1 and ACNS3 is due to the pseudocapacitance of NiO combined with the EDLC of carbon. For ACNS2 and ACNS3, which were prepared from the DMF-insoluble and DMF-soluble fractions of asphaltene, respectively, the change of the NiO/C ratios during the fractionation process led to the decrease in capacitance. The Ni content decreased in the DMF-insoluble fraction, which led to a NiO/C composite with lower NiO content. Thus, the pseudocapacitive contribution decreased, leading to an overall decrease of the \({C}_{sp}\) of ACNS2. Meanwhile, the DMF-soluble fraction, which was rich in Ni, formed a NiO/C composite with higher NiO content and the capacitance increased again in ACNS3. However, the capacitance was lower than that of ACNS1, which may be due to the increased resistance of the NiO component, as discussed further with the impedance measurement results (Fig. 6). The effect of NiO content in NiO/C composites on the capacitance has been reported in prior studies. For example, Lota and coworkers prepared NiO–activated carbon composites with three different ratios: 34% NiO and 66% activated carbon, 17% NiO and 83% activated carbon, and 7% NiO and 93% activated carbon. Their results indicated that the low amount of NiO (7%) resulted in the highest capacitance40. Moreover, the smaller NiO crystal size in ACNS1 could be beneficial since it can provide higher specific surface area for charge storage. It should also be noted that most of the reported literature applies 5–10% conductive carbon black to improve the conductivity of the electrodes23,41. In this work, we prepared electrodes without adding carbon black.
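The capacitance values quoted above follow from Eqs. (1) and (2). A minimal Python sketch of the two calculations is given below; the numerical inputs (mass, current, discharge time) are illustrative placeholders rather than the measured data, and the CV integral is taken over one full cycle as a simplified reading of Eq. (1).

# Specific capacitance from a CV trace (Eq. 1) and from a galvanostatic
# discharge (Eq. 2). Inputs are hypothetical; only the formulas follow the
# equations quoted in the text.
import numpy as np

def csp_from_cv(potential, current, mass, scan_rate):
    """Eq. (1): C_sp = (integral of I dV) / (m * v * delta_V), in F/g.
    potential [V] and current [A] sampled over one full CV cycle,
    mass [g], scan_rate [V/s]."""
    integrated_area = abs(np.trapz(current, potential))   # loop area of the CV
    window = potential.max() - potential.min()             # delta_V
    return integrated_area / (mass * scan_rate * window)

def csp_from_gcd(current, discharge_time, mass, window):
    """Eq. (2): C_sp = I * delta_t / (m * delta_V), in F/g."""
    return current * discharge_time / (mass * window)

# Example: ~1 mg of active material discharged at 1 A/g over a 1 V window.
print(csp_from_gcd(current=1e-3, discharge_time=380.0, mass=1e-3, window=1.0))
# -> 380.0 F/g, the same order of magnitude as reported for ACNS1 at 1 A/g

Note that conventions for the CV integral (half-cycle versus full cycle) vary between groups; the sketch simply applies Eq. (1) as written.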
Table 2 summarizes the capacitance of various carbon and carbon-composite materials compared to the ones reported in this study, indicating that very comparable performance was achieved in this work.

(A) CV and (B) galvanostatic charge–discharge of ACNS1; (C) specific capacitance vs scan rate of all CNS and reduced GO (rGO); (D) specific capacitance vs charge–discharge current of all CNS and rGO. CV and galvanostatic charge–discharge of (A,B) rGO, (C,D) BCNS, (E,F) ACNS2, and (G,H) ACNS3, respectively. (A) Electrochemical impedance spectroscopy of rGO, BCNS, ACNS1, ACNS2, and ACNS3 electrodes, and (B) cyclic performance of the ACNS1 electrode at 5 A g−1. Table 2 Capacitive performance of different carbon and NiO based materials.

Figure 6A shows the Nyquist plots of the BCNS, ACNS1, ACNS2 and ACNS3 electrodes along with the reduced GO electrode. All of the electrodes have low electrode resistance, as indicated by the small value of the x-intercept. The curve for reduced GO is very steep at low frequency, which is indicative of good EDLC behavior. A similar trend is observed for the BCNS electrode but with a lower slope. For the ACNS electrodes, a semicircle with diameter increasing in the order ACNS1 < ACNS2 < ACNS3 is observed, indicative of interfacial resistance due to NiO. The steep diffuse-layer region for ACNS1 is comparable to that of reduced GO, while the slopes are lower for ACNS2 and ACNS3. To test the stability of the electrode, cycling tests were conducted galvanostatically between 0 and − 1 V. Figure 6B shows the cyclic stability of the ACNS1 electrode during the galvanostatic charge–discharge experiment at a current density of 5 A g−1 over 1000 cycles. As shown, the electrode retains ~ 85% of its capacitance after 1000 cycles, thus exhibiting good cyclic stability.

Nanoporous carbon and NiO/carbon composite materials were prepared directly from bitumen and asphaltenes, respectively. The as-prepared carbon and NiO/carbon composites have large surface areas with abundant pores for ion adsorption and transport. Bitumen-derived nanoporous carbon had capacitive performance comparable to chemically reduced GO. The charge storage in BCNS is through the EDLC mechanism. Asphaltene-derived NiO/C composite electrodes, on the other hand, exhibit enhanced capacitance due to the pseudocapacitive behavior of the NiO present in the nanoporous NiO/carbon composite. The high theoretical capacitance of NiO, owing to fast reversible redox reactions, along with the high conductance and EDLC behaviour of the activated nanoporous carbon, makes the composite suitable as a supercapacitor electrode material. The capacitance of the asphaltene-derived NiO/C electrodes, prepared without adding conductive carbon black, is comparable to most of the carbon-based supercapacitors reported in the literature which also use conductive carbon for enhanced conductivity. This study suggests that asphaltene derived from crude bitumen, which is rich in nickel, is suitable for the synthesis of activated nanoporous NiO/C composite material with high capacitive performance and cycling stability.

Salanne, M. et al. Efficient storage mechanisms for building better supercapacitors. Nat. Energy 1, 16070. https://doi.org/10.1038/nenergy.2016.70 (2016). Frackowiak, E. & Béguin, F. Carbon materials for the electrochemical storage of energy in capacitors. Carbon 39, 937–950. https://doi.org/10.1016/S0008-6223(00)00183-4 (2001). Choi, N.-S. et al. Challenges facing lithium batteries and electrical double-layer capacitors. Angew.
Chem. Int. Ed. 51, 9994–10024. https://doi.org/10.1002/anie.201201429 (2012). Ke, Q. & Wang, J. Graphene-based materials for supercapacitor electrodes—a review. J. Materiom. 2, 37–54. https://doi.org/10.1016/j.jmat.2016.01.001 (2016). Huang, S., Zhu, X., Sarkar, S. & Zhao, Y. Challenges and opportunities for supercapacitors. APL Mater. 7, 100901. https://doi.org/10.1063/1.5116146 (2019). Li, Q. et al. A review of supercapacitors based on graphene and redox-active organic materials. Materials 12, 703 (2019). El-Kady, M. F. et al. Engineering three-dimensional hybrid supercapacitors and microsupercapacitors for high-performance integrated energy storage. Proc. Natl. Acad. Sci. 112, 4233–4238. https://doi.org/10.1073/pnas.1420398112 (2015). Conway, B. E. Electrochemical Supercapacitors: Scientific Fundamentals and Technological Applications (Plenum Press, 1999). Jiang, Y. & Liu, J. Definitions of pseudocapacitive materials: A brief review. Energy Environ. Mater. 2, 30–37. https://doi.org/10.1002/eem2.12028 (2019). Sahoo, R., Pal, A. & Pal, T. In Noble Metal-Metal Oxide Hybrid Nanoparticles (eds Satyabrata, M. et al.) 395–430 (Woodhead Publishing, 2019). Deng, W., Ji, X., Chen, Q. & Banks, C. E. Electrochemical capacitors utilising transition metal oxides: An update of recent developments. RSC Adv. 1, 1171–1178. https://doi.org/10.1039/C1RA00664A (2011). Trasatti, S. & Buzzanca, G. Ruthenium dioxide: A new interesting electrode material. Solid state structure and electrochemical behaviour. J. Electroanal. Chem. Interfacial Electrochem. 29, A1–A5. https://doi.org/10.1016/S0022-0728(71)80111-0 (1971). Chen, Y. M., Cai, J. H., Huang, Y. S., Lee, K. Y. & Tsai, D. S. Preparation and characterization of iridium dioxide–carbon nanotube nanocomposites for supercapacitors. Nanotechnology 22, 115706. https://doi.org/10.1088/0957-4484/22/11/115706 (2011). Sk, M. M., Yue, C. Y., Ghosh, K. & Jena, R. K. Review on advances in porous nanostructured nickel oxides and their composite electrodes for high-performance supercapacitors. J. Power Sources 308, 121–140. https://doi.org/10.1016/j.jpowsour.2016.01.056 (2016). Wang, D.-W., Li, F. & Cheng, H.-M. Hierarchical porous nickel oxide and carbon as electrode materials for asymmetric supercapacitor. J. Power Sources 185, 1563–1568. https://doi.org/10.1016/j.jpowsour.2008.08.032 (2008). Duraisamy, N., Numan, A., Fatin, S. O., Ramesh, K. & Ramesh, S. Facile sonochemical synthesis of nanostructured NiO with different particle sizes and its electrochemical properties for supercapacitor application. J. Colloid Interface Sci. 471, 136–144. https://doi.org/10.1016/j.jcis.2016.03.013 (2016). Chen, W., Gui, D. & Liu, J. Nickel oxide/graphene aerogel nanocomposite as a supercapacitor electrode material with extremely wide working potential window. Electrochim. Acta 222, 1424–1429. https://doi.org/10.1016/j.electacta.2016.11.120 (2016). Liu, T., Jiang, C., Cheng, B., You, W. & Yu, J. Hierarchical flower-like C/NiO composite hollow microspheres and its excellent supercapacitor performance. J. Power Sources 359, 371–378. https://doi.org/10.1016/j.jpowsour.2017.05.100 (2017). National Energy Board (2017). Fuhr, B. J., Hawrelechko, C., Holloway, L. R. & Huang, H. Comparison of bitumen fractionation methods. Energy Fuels 19, 1327–1329. https://doi.org/10.1021/ef049768l (2005). Handle, F. et al.
Tracking aging of bitumen and its saturate, aromatic, resin, and asphaltene fractions using high-field fourier transform ion cyclotron resonance mass spectrometry. Energy Fuels 31, 4771–4779. https://doi.org/10.1021/acs.energyfuels.6b03396 (2017). Zhu, J. et al. Engineering cross-linking by coal-based graphene quantum dots toward tough, flexible, and hydrophobic electrospun carbon nanofiber fabrics. Carbon 129, 54–62. https://doi.org/10.1016/j.carbon.2017.11.071 (2018). Qin, F. et al. From coal-heavy oil co-refining residue to asphaltene-based functional carbon materials. ACS Sustain. Chem. Eng. 7, 4523–4531. https://doi.org/10.1021/acssuschemeng.9b00003 (2019). He, X. et al. Porous carbon nanosheets from coal tar for high-performance supercapacitors. J. Power Sources 357, 41–46. https://doi.org/10.1016/j.jpowsour.2017.04.108 (2017). Xu, C. et al. Synthesis of graphene from asphaltene molecules adsorbed on vermiculite layers. Carbon 62, 213–221. https://doi.org/10.1016/j.carbon.2013.05.059 (2013). Enayat, S. et al. From crude oil production nuisance to promising energy storage material: Development of high-performance asphaltene-derived supercapacitors. Fuel 263, 116641 (2020). Qin, F., Tian, X., Guo, Z. & Shen, W. Asphaltene-based porous carbon nanosheet as electrode for supercapacitor. ACS Sustain. Chem. Eng. 6, 15708–15719 (2018). Yi, C.-Q., Zou, J.-P., Yang, H.-Z. & Leng, X. Recent advances in pseudocapacitor electrode materials: Transition metal oxides and nitrides. Trans. Nonferrous Metals Soc. China 28, 1980–2001. https://doi.org/10.1016/S1003-6326(18)64843-5 (2018). Chen, M.-S. et al. Controllable growth of carbon nanosheets in the montmorillonite interlayers for high-rate and stable anode in lithium ion battery. Nanoscale 12, 16262–16269. https://doi.org/10.1039/D0NR03962D (2020). Guo, X., Liu, G., Yue, S., He, J. & Wang, L. Hydroxyl-rich nanoporous carbon nanosheets synthesized by a one-pot method and their application in the in situ preparation of well-dispersed Ag nanoparticles. RSC Adv. 5, 96062–96066. https://doi.org/10.1039/C5RA18300F (2015). Zhang, S. et al. Construction of hierarchical porous carbon nanosheets from template-assisted assembly of coal-based graphene quantum dots for high performance supercapacitor electrodes. Mater. Today Energy 6, 36–45. https://doi.org/10.1016/j.mtener.2017.08.003 (2017). Yuan, K. et al. Two-dimensional core-shelled porous hybrids as highly efficient catalysts for the oxygen reduction reaction. Angew. Chem. 55, 6858–6863. https://doi.org/10.1002/anie.201600850 (2016). Zhuang, X., Zhang, F., Wu, D. & Feng, X. Graphene coupled Schiff-base porous polymers: Towards nitrogen-enriched porous carbon nanosheets with ultrahigh electrochemical capacity. Adv. Mater. 26, 3081–3086. https://doi.org/10.1002/adma.201305040 (2014). Chen, L. et al. Porous graphitic carbon nanosheets as a high-rate anode material for lithium-ion batteries. ACS Appl. Mater. Interfaces 5, 9537–9545. https://doi.org/10.1021/am402368p (2013). Zhang, W., Zhang, P., Wang, Y. & Li, J. Preparation of Mg(OH)2 nanosheets and self-assembly of its flower-like nanostructure via precipitation method for heat-resistance application. Integr. Ferroelectr. 163, 148–154. https://doi.org/10.1080/10584587.2015.1042793 (2015). Lozano-Castelló, D. et al. Influence of pore structure and surface chemistry on electric double layer capacitance in non-aqueous electrolyte. Carbon 41, 1765–1775. https://doi.org/10.1016/S0008-6223(03)00141-6 (2003). Sanchez-Sanchez, A. et al. 
Structure and electrochemical properties of carbon nanostructures derived from nickel(II) and iron(II) phthalocyanines. J. Adv. Res. 22, 85–97. https://doi.org/10.1016/j.jare.2019.11.004 (2020). Liu, T. et al. Distribution of vanadium compounds in petroleum vacuum residuum and their transformations in hydrodemetallization. Energy Fuels 29, 2089–2096. https://doi.org/10.1021/ef502352q (2015). Yakubov, M. R., Sinyashin, G. R. A. K. O., Milordov, D. V., Tazeeva, E. G., Yakubova, S. G., Borisov, D. N., Gryaznov, P. I., Mironov, N. A., & Borisova, Y. Y. In: Yusuf Y (ed) Phthalocyanines and Some Current Applications, Ch. 7, 153–168. (IntechOpen, 2017). Lota, K., Sierczynska, A. & Lota, G. Supercapacitors based on nickel oxide/carbon materials composites. Int. J. Electrochem. 2011, 321473. https://doi.org/10.4061/2011/321473 (2011). Wu, S.-R., Liu, J.-B., Wang, H. & Yan, H. NiO@graphite carbon nanocomposites derived from Ni-MOFs as supercapacitor electrodes. Ionics 25, 1–8. https://doi.org/10.1007/s11581-018-2812-z (2019). Niu, Z. et al. All-solid-state flexible ultrathin micro-supercapacitors based on graphene. Adv. Mater. 25, 4035–4042 (2013). Liu, F., Song, S., Xue, D. & Zhang, H. Folded structured graphene paper for high performance electrode materials. Adv. Mater. 24, 1089–1094 (2012). Jeong, H. M. et al. Nitrogen-doped graphene for high-performance ultracapacitors and the importance of nitrogen-doped sites at basal planes. Nano Lett. 11, 2472–2477 (2011). Cong, H.-P., Ren, X.-C., Wang, P. & Yu, S.-H. Flexible graphene–polyaniline composite paper for high-performance supercapacitor. Energy Environ. Sci. 6, 1185–1191 (2013). Kahimbi, H., Hong, S. B., Yang, M. & Choi, B. G. Simultaneous synthesis of NiO/reduced graphene oxide composites by ball milling using bulk Ni and graphite oxide for supercapacitor applications. J. Electroanal. Chem. 786, 14–19 (2017). Al-Enizi, A. M. et al. Synthesis and electrochemical properties of nickel oxide/carbon nanofiber composites. Carbon 71, 276–283 (2014). Liu, M. et al. Encapsulation of NiO nanoparticles in mesoporous carbon nanospheres for advanced energy storage. Chem. Eng. J. 308, 240–247 (2017).

The authors acknowledge the support from Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant (Q. Lu), the Start-up Fund from the University of Calgary (Q. Lu), Canada Foundation of Innovation (CFI) (Q. Lu), and the University of Calgary's Canada First Research Excellence Fund (CFREF) program, entitled the Global Research Initiative (GRI) in Sustainable Low-Carbon Unconventional Resources.

Department of Chemical and Petroleum Engineering, University of Calgary, Calgary, AB, T2N 1N4, Canada: Dinesh Mishra, Rufan Zhou, Md. Mehadi Hassan, Jinguang Hu, Ian Gates, Nader Mahinpey & Qingye Lu. D.M. conducted most of the experiments and wrote the original manuscript. R.Z. and M.H. conducted some of the experiments. J.H., I.G. and N.M. supervised the students. Q.L. designed the project, supervised the students and edited the manuscript. Correspondence to Qingye Lu.

Mishra, D., Zhou, R., Hassan, M.M. et al. Bitumen and asphaltene derived nanoporous carbon and nickel oxide/carbon composites for supercapacitor electrodes. Sci Rep 12, 4095 (2022). https://doi.org/10.1038/s41598-022-08159-3
What is the second conserved Quantity of the Pendulum?

Consider the problem of a classical pendulum whose state can be described by a function $\theta(t)$ where $\theta$ is measured from the line directly below. We then have that our pendulum's $\theta$ obeys the following differential equation $$ \frac{d^2 \theta}{dt^2 } + \frac{g}{l}\sin \theta = 0 $$ Through the trick $ K = \frac{d\theta}{dt}, K \frac{dK}{d\theta} = \frac{d^2 \theta}{dt^2}$ we can re-write the above differential equation as a different one and then integrate it to find that there is a constant $Q_0$ such that $$ \frac{1}{2} \left( \frac{d \theta}{d t} \right)^2 - \frac{g}{l} \cos(\theta) = Q_0 $$ It's fruitful to ask "what does this really mean?" — what is that $Q_0$ actually supposed to be? By multiplying both sides by $ ml^2 $ we find, rather enlighteningly, the following: $$ \underbrace{\frac{1}{2} ml^2 \left( \frac{d \theta}{d t} \right)^2}_{\text{Kinetic Energy}} + \underbrace{-mgl \cos(\theta)}_{\text{Gravitational Potential Energy}} = ml^2 Q_0 = E_0 $$ And now this is much less mysterious: it is clear this $Q_0$ is just a scaled version of $E_0$, the total energy of our system, which is constant as we should expect. Of course we can continue going forward here... Before we added the extra mass-length information, the differential equation could have been re-written as: $$ \frac{1}{\sqrt{2Q_0 + \frac{g}{l} 2\cos(\theta)}} \frac{d \theta}{d t} = 1$$ Again this can be integrated to yield another quantity... $$ \sqrt{\frac{2}{Q_0 + \frac{g}{l}}} F \left[ \frac{\theta}{2} , 2 \frac{g}{l} \frac{1}{Q_0 + \frac{g}{l}} \right] = t + Q_1 $$ This suggests then that the following is true... $$ \sqrt{\frac{2}{Q_0 + \frac{g}{l}}} F \left[ \frac{\theta}{2} , 2 \frac{g}{l} \frac{1}{Q_0 + \frac{g}{l}} \right] -t = Q_1 $$ I.e., there is some quantity $Q_1$ which does NOT vary with time, and which can be found through that horrendous-looking left hand side. What conserved quantity is this $Q_1$ supposed to represent kinematically? It should be something akin to a "second energy" or "momentum" of our pendulum, but I can't figure out what this thing is supposed to be and there doesn't seem to be any description of it online. It does appear to be intimately related to the period. One could also theoretically verify it is conserved by measuring that LHS in an experiment and confirming it doesn't vary with time. Some realization: If you declare the state of your system to be $S$ at time $t=0$ then at any time thereafter you would also declare that "back in $t=0$ the state was $S$". The 'conservation' of $Q_1$ appears to be a restatement of just that.

classical-mechanics energy-conservation conservation-laws Volker Siegel frogeyedpeas

Interesting. But I'd be surprised if in this simple model there were another conserved quantity besides energy. Could it be that that "horrendous looking left hand side" can actually be rewritten in terms of the energy only? – fra_pero

Related: physics.stackexchange.com/q/8626/2451 – Qmechanic ♦

$Q_1$ is not a conserved quantity at all. It is just a parameter which depends on the initial conditions. First of all, there's an error when you derived $$\frac{1}{\sqrt{2Q_0 + \frac{g}{l} 2\cos(\theta)}} \frac{d \theta}{d t} = 1\tag{1}$$ You only took the positive square root, whereas you should have taken both possibilities of the RHS being $+1$ and $-1$. You can easily see that equation $(1)$ never holds whenever $\theta$ is decreasing, i.e.
$\mathrm d \theta /\mathrm d t<0$. To correct this, we need to add a modulus around the $\mathrm d \theta/\mathrm dt$ term. Thus the corrected equation would be $$\frac{1}{\sqrt{2Q_0 + \frac{g}{l} 2\cos(\theta)}} \left|\frac{d \theta}{d t}\right| = 1\tag{2}$$ I would advise you to re-integrate equation $(2)$ to find the correct solution which holds over the complete range of motion.

What about the other equation? The final equation which you obtained, $$\sqrt{\frac{2}{Q_0 + \frac{g}{l}}} \left(F \left[ \frac{\theta}{2} , 2 \frac{g}{l} \frac{1}{Q_0 + \frac{g}{l}} \right]\right) -t = Q_1\tag{3}$$ only holds true for the cases where $\mathrm d \theta/\mathrm dt>0$, so for now, we'll only consider cases where the pendulum is going from left to right, but the insight provided below will also help you determine the physical meaning of the new constant that you would obtain after integrating equation $(2)$. Also, equation $(3)$ contains an incomplete elliptic integral of the first kind. One of the important properties of this function is that $$F[0,k]=0$$ where $k$ is any real number. Thus, substituting $\theta=0$ in equation $(3)$, we get \begin{align} \sqrt{\frac{2}{Q_0 + \frac{g}{l}}} \left(F \left[ 0 , 2 \frac{g}{l} \frac{1}{Q_0 + \frac{g}{l}} \right]\right) -t_0 &= Q_1\\ 0-t_0&=Q_1\\ Q_1+t_0&=0\tag{4} \end{align} where $t_0$ is the time when the pendulum passes through its equilibrium position for the first time. And since we are only considering the case where $\mathrm d\theta /\mathrm dt>0$, the above equation is valid only for the cases where the pendulum comes from the left and goes to the right while passing through the equilibrium position.

Physical Significance The physical significance of the constant $Q_1$ isn't as deep and profound as you expected. $Q_1$ is just a shifting constant applied to the time. This constant will change upon changing your definition of $t=0$. Thus, it's just a parameter which adjusts/shifts the time scale of the oscillation. It adjusts according to the initial conditions and doesn't give you any more information about the dynamical parameters of the system.

This reminds me of Landau's Mechanics, where in chapter two he notices that since the equations of motion are 2nd order differential equations, for every degree of freedom we have 2 constants of integration. So, for $d$ DOFs, the solution depends on $2d$ constants, which are effectively constants of motion. However, since time is translation-invariant, one of these constants always shifts the initial time. Hence, for $d$ DOFs we always have $2d-1$ independent constants of motion. For the pendulum $d=1$, so we have one constant of motion (energy), plus initial time shifting. – HicHaecHoc

@HicHaecHoc Yeah, that's a great insight while solving problems.

Why is $F$ an incomplete elliptic integral of the first kind instead of a function? $F[0,k]$ means taking the integral from zero to zero, which is, rather trivially, zero. This holds for any integral. – Deschele Schilder

@descheleschilder No, the OP obtained the incomplete elliptic integral of the first kind after integrating their differential equation (which is correct from the POV of integration). $F$ isn't a generic function. It is a specific function used to represent elliptic integrals which can't be expressed in closed form using well known functions.
@descheleschilder Yeah, so the property that $F(0,k)=0$ is more of a trivial result than a specific important property. Nonetheless, the answer still holds.

Suppose that in any physical system, the solution to the equation of motion is $x(t) = f(t, x_0, v_0)$. Then $x - f = 0$, so it's conserved. In this way, you can manufacture a new conserved quantity for any physical situation. You can also add on any function $g(x_0, v_0)$ of the initial conditions, giving an infinite family of conserved quantities $x - f + g$. This is what you found. For example, for a ball in freefall, you can easily check that $x - (x_0 + v_0 t - gt^2/2) + x_0$ is conserved, for this reason. But this isn't a new conserved quantity at all -- it's just a minor rewriting of the solution to the equation of motion, whose particular value is the initial position. You can't use this idea to do anything. If you don't already know the general solution $f$ then you can't compute $x-f+g$; if you do know $f$ then you don't need it; and if you don't know $f$ but somehow know the numeric value of $x-f+g$, that just tells you about the initial conditions, which you already knew anyway. knzhou

The configuration space of a pendulum is 1D (in fact, a circle, $S^1$) so its phase space is 2D (a cylinder, $S^1\times \mathbb{R}$). If there were two integrals of motion then we could label each point in the 2D phase space by those two values, and since they're meant to be conserved, the phase space dynamics would have to be trivial (i.e. position and momenta never change). So whatever your $Q_1$ is, it is either: (a) some function of $Q_0$, so not an independent integral of motion, or (b) a weaker kind of conserved quantity that is not just a function of phase space coordinates. For instance, the initial angle and angular velocity are strictly conserved quantities along a trajectory. I suspect your $Q_1$ can be written in terms of the initial conditions, i.e. is of type (b). jacob1729

I'm not sure that this represents an actual "conserved quantity." In order for it to, you'd need it to satisfy $$ \{ H, Q_1\} = \frac{\partial Q_1}{\partial t}. $$ Here, I am taking $H = Q_0$ to be the (rescaled) Hamiltonian which governs time evolution. Then the momentum is $p$ which, by Hamilton's equation, is $p = \dot \theta$. The problem is you have defined $Q_1$ in terms of $Q_0$, which is really a function of $\theta$ and $p$. In order for this to be a true conserved quantity, it would have to satisfy the above equation on phase space (using the full definition of $Q_0(\theta, p)$ when plugged into $Q_1$), which I don't think it does, but I could be wrong.

For an ODE system with $N$ initial conditions, there are $N$ conserved quantities, say, $q_i(t)-f_i(q_1,q_2,...,q_N, t)=0$, where $q_i$ are coordinates and velocities of a general meaning and $f_i$ are the corresponding solutions of the ODE. Any combination of conserved quantities is also a conserved quantity. You may add to the left-hand side and to the right-hand side any constant, for the simplest example. And now you may combine them into complicated functions to be constants too. Vladimir Kalitvianski

Let's take a dimensional analysis approach: you end up in the first part with $$ ml^2 Q_0 =E_0,$$ writing that it is clear that $Q_0$ is just a scaled version of $E_0$ (so also energy). Dimensional analysis shows that this can't be the case.
As far as the mass $m$ is concerned, one can write $F=ma$, which translated into units gives $N=kg\frac{m}{{sec}^2}$, with the result that mass has the unit $\frac{N{sec}^2}{m}$. $l^2$ obviously has the unit $m^2$. Combining both, $ml^2$ has the unit $Nm{sec}^2=J{sec}^2$. It follows, because $E_0$ has the unit $J$, that $Q_0$ must have the unit $\frac{1}{{sec}^2}$ (because $ml^2$ has the unit $J{sec}^2$). This means $Q_0$ is not energy, which implies it is not a scaled version of $E_0$ (which would be the case if $ml^2\gt 1$ were a dimensionless scaling factor, which it is not). So the first quantity $Q_0$ is just $\frac{E_0}{ml^2}$, which is obviously conserved ($E_0$, $m$, and $l$ are constant for the pendulum). When it comes to the second quantity $Q_1$, note that in your last equation, $$ \sqrt{\frac{2}{Q_0 +\frac{g}{l}}}F\left[\frac{\theta}{2},2\frac{g}{l}\frac{1}{Q_0 + \frac{g}{l}}\right]-t = Q_1 ,$$ the square root part is constant and has the unit $sec$ ($Q_0$, $g$, and $l$ are all constants; both $Q_0$ and $\frac{g}{l}$ have the unit $\frac{1}{{sec}^2}$, so the square root of their inverted sum has the second as its unit). Let's call this point in time $t_2$. Because we can't subtract two quantities with different units (the expression containing the square root and $F$, minus $t$), $F$ will have to spit out a real number without a unit. Note that the second argument of $F$ has a constant value without a unit ($2\frac{g}{l}$ times the inverse of $Q_0 +\frac{g}{l}$ gives the unit $\frac{{sec}^2}{{sec}^2}=1$, i.e. no unit at all) for every dimensionless $\theta$ in the first argument, so the product with the square root has the unit $sec$, as it should. For every angle $\theta$, let $F$ denote the constant, unitless real number spit out, so the product of the square root and $F$ gives $Ft_2$. It's clear that $Q_1$ has the $sec$ for a unit, so it represents a point in time. Let's call this point $Ct_1$, in which $C$ is a constant without a unit. Altogether this gives $$Ft_2 -t=Ct_1$$ so $$Ft_2-Ct_1=t$$ This means that $Q_1$ ($Ct_1$) is just the time that can be subtracted from a given time $Ft_2$ to give the time $t$. Deschele Schilder

An Analogy Let's examine the conceptually equivalent system of the motion of a freely falling mass $m$ from height $h$ relative to the ground it falls on, which has one DOF (the vertical displacement) and is described by the same kind of differential equation: $$ \frac{d^2 s}{dt^2 }-g = 0 $$ We can apply the same trick: $K=\frac{ds}{dt}$, $K\frac{dK}{ds}=\frac{ds}{dt}\frac{dK}{ds}=\frac{dK}{dt}= \frac{d^2 s}{dt^2 }$, and integrate with respect to $s$: $$\int_0^h(K\frac{dK}{ds}-g)ds=\frac{1}{2}K^2-gh=Q_0,$$ so, after multiplying both sides with the mass $m$, one gets $$\frac{1}{2}mv^2-mgh=E_0=mQ_0,$$ so $Q_0=\frac{E_0}{m}$ and this equation expresses the conservation of energy $E_0$. So, $$v^2=2\frac{E_0}{m}+2gh,$$ from which it follows that $$\frac{1}{\sqrt{\frac{2E_0}{m}+2gh}}\frac{ds}{dt}=1,$$ so $$\int_0^h\frac{1}{\sqrt{\frac{2E_0}{m}+2gh}}ds=t+C_1$$ Now both quantities under the square root sign have the unit $\frac{m^2}{{sec}^2}$, so the inverse square root function has the unit $\frac{sec}{m}$. This means that the integral of this function over $ds$ (which has the value $Ch$, in which $C$ is the constant inverse square root) has the dimension of time, and so has $Q_1$. It's the difference between $Ch$ and a time somewhere on the trajectory of the freely falling mass.
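The time-shift reading of $Q_1$ is easy to check numerically. Below is a minimal Python sketch, assuming SciPy's convention $F(\phi, m)=\int_0^{\phi}(1-m\sin^2 t)^{-1/2}\,dt$ (so the OP's second argument is the parameter $m$), and restricted to a rotating pendulum with $\dot\theta>0$ throughout so that the positive-root form of the equation applies. The values of $g/l$, $\theta_0$ and $\dot\theta_0$ are arbitrary illustrative choices.

# Numerically verify that Q1 = sqrt(2/(Q0 + g/l)) * F(theta/2, m) - t stays
# constant along a pendulum trajectory (rotating case, d(theta)/dt > 0).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import ellipkinc   # F(phi, m) with parameter m = k**2

gl = 1.0                       # chosen ratio g / l
theta0, omega0 = 0.0, 3.0      # fast enough that the pendulum keeps rotating

def rhs(t, y):
    theta, omega = y
    return [omega, -gl * np.sin(theta)]

sol = solve_ivp(rhs, (0.0, 5.0), [theta0, omega0], dense_output=True,
                rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, 5.0, 11)
theta, omega = sol.sol(t)

Q0 = 0.5 * omega**2 - gl * np.cos(theta)       # the conserved "energy" Q0
m = 2.0 * gl / (Q0 + gl)                       # parameter of the elliptic integral
Q1 = np.sqrt(2.0 / (Q0 + gl)) * ellipkinc(theta / 2.0, m) - t

print("spread in Q0:", np.ptp(Q0))   # ~1e-10: conserved
print("spread in Q1:", np.ptp(Q1))   # ~1e-10: constant along the trajectory

With these initial conditions the pendulum passes through $\theta=0$ at $t_0=0$, so both printed spreads come out at the level of the integration tolerance and $Q_1 \approx -t_0 = 0$, consistent with Eq. (4) above.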
Sucrose- and formaldehyde-modified native starches as possible pharmaceutical excipients in tableting

Ifeanyi Justin Okeke1,2, Angus Nnamdi Oli3, Chioma Miracle Ojiako (ORCID: orcid.org/0000-0001-8126-0135)3,4, Emmanuel Chinedum Ibezim2 & Jude N. Okoyeh5

Starches have been shown to be important across various disciplines such as the pharmaceutical, food and paper industries. Starch is basically a mixture of polymers consisting of α-d-glucose as the monomeric unit. The goal of this study is to modify the native starches obtained from Zea mays, Triticum aestivum and Oryza sativa through cross-linking (using sucrose and formaldehyde at different concentrations) and also to assess the suitability of the modified starches as potential excipients [binders] for the tableting of paracetamol. Maize and rice starches cross-linked with 2.5% sucrose gave the lowest percentage moisture content. The batches cross-linked with 40% formaldehyde showed the highest moisture content. The densities (bulk and tapped) of maize, wheat and rice starches showed a reduction with increasing concentration of the cross-linking agent for sucrose, with the reverse being the case for formaldehyde. The different concentrations of sucrose- and formaldehyde-cross-linked maize, wheat and rice starches had pH values between 4.50 and 5.52. The onset and end set of the glass transition temperatures varied for all the starches modified with formaldehyde. The melting peak temperatures obtained indicated that the formaldehyde-modified rice starch had a significantly lower melting temperature than those of the wheat and maize starches. This study reveals that various concentrations of sucrose and formaldehyde had some influence on the properties of the native starches and resulted in the production of new starch motifs with improved or new functionalities suitable for use as drug excipients in tableting.

Starches and their pharmaceutical uses have been studied extensively by many researchers. Starch is found in just about all green plants in the form of a carbohydrate reserve. It is a natural polymer which is generated from carbon dioxide and water by photosynthesis in plants (Jankovi 2010). Starch is the chemical storage form of solar energy (Kolawole et al. 2013). It is basically a mixture of polymers which consists of α-d-glucose as the monomeric unit. Two types of starch polymers exist: a mixture of linear polymers (amylose) and a mixture of branched polymers (amylopectin). These polysaccharides exist in the plant as granules that are insoluble in cold water (Pérez and Bertoft 2010). The major intermolecular forces giving rise to the structure and integrity of the starch system are the hydrogen bonds between these units and water. Starch is highly present in staple foods, e.g., potatoes, maize, wheat, rice and cassava (Umida-Khodjaeva 2013), as it is the commonest carbohydrate form in the human diet. Starches (most especially modified starches) have widespread use in the pharmaceutical, food and paper industries, and they contribute to the quality, appearance and structure of the food item (Hoover et al. 2010; Agyepong and Barimah 2018). Pharmaceutically, they are used as binders and diluents; in paper industries they are used as binders for laminates and in the corrugating process, and as surface sizing and coating agents (Garcia et al. 2020).
Also, in food industries, starches and their modified forms have been used in canned, hot-filled, dry mix, baked or frozen foods, pet and infant foods, snacks and breakfast cereals, meats, dairy products, etc. Cross-linking is a processing method in which small amounts of compounds, which have the ability to react with more than one hydroxyl group, are incorporated into starch polymers. Cross-linking of starch is a highly popular method employed in polysaccharide chemistry (Ayoub and Rizvi 2009; Ibrahim et al. 2018). Degradation of starch can be achieved by pre-gelatinization (Alabi et al. 2018). Gelatinization employs the application of heat and water to disrupt the granular and crystalline structures of starch (Adetunji 2019). A lot of biopolymers like starch are hydrophilic, and some are soluble in hot water. Native starch is incompatible with some hydrophobic polymers and hence cannot be used directly (Zhang et al. 2015). Cross-linking alters several physical properties of starch, which is useful because native starch does not usually possess the preferred properties (Shah et al. 2016; Ibezim and Andrade 2006; Thanh-Blicharz et al. 2021). The use of highly cross-linked starch amylose matrices has been described in the formulation of controlled release oral solid dosage forms (Elgaied-Lamouchi et al. 2020). Therefore, pharmaceutically, starch is a vital ingredient and no amount of study on its uses can be exhaustive.

Changing starch so that it acquires characteristics that deviate from those of the native starch is known as 'modification of starch' and the products are called 'derivatives'. There have been various modifications of starch granules, e.g., blending (Neelam et al. 2012; Sapsford et al. 2013). It does not matter whether the change occurs in a physical or biochemical manner (Piriyaprasarth et al. 2010). The chemical modification of starch has been shown to affect the rate and extent of digestion in the small intestine (Haub et al. 2010; Shen et al. 2019; Siroha and Sandhu 2018). Excipients are defined as substances that are included in a drug delivery system (aside from the active drug) and have been appropriately evaluated for safety. Pharmaceutical excipients are further described as additives used to convert pharmacologically active compounds into pharmaceutical dosage forms appropriate for administration to patients (Persson and Alderborn 2018; Mohammed 2017). Pharmaceutical excipients should be physically and chemically stable, non-toxic, commercially available, economically feasible and possess pleasant organoleptic properties (Allamneni and Suresh 2014). Starch is a commonly used excipient due to its relatively low cost and versatility (Muazu et al. 2012). Previously, native starches were used in solid dosage forms (as binders and disintegrants), but their utilization is restricted due to poor flowability. Nowadays, modified or cross-linked starches are mostly preferred, e.g., pregelatinized starch (Mohammed 2017). Modification enhances the starch quality for use in pharmaceuticals as drug binders and disintegrants (Adjei et al. 2017). Modified rice starch, starch acetate, etc., are now established in pharmaceutical industries as multifunctional excipients (Lawal 2019). Different forms of modified starches have also been analyzed for sustaining drug release for improved compliance (Elgaied-Lamouchi et al. 2021).
Acid-hydrolyzed modified starch of Plectranthus esculentus has been reported to produce fillers/binders that can be directly compressed and can serve as alternatives to MCC PH 101 (microcrystalline cellulose) in modern tablet formulations (Khalid et al. 2016). A study has shown that cross-linked Cyperus esculentus (tiger nut) starch and sodium alginate have the ability to sustain ibuprofen release, making them a useful tool for targeted drug delivery, especially to the lower GIT (Olayemi et al. 2020). The aim of this study is to modify the native starches obtained from Zea mays, Triticum aestivum and Oryza sativa through cross-linking (using sucrose and formaldehyde at different concentrations) and also to assess the suitability of the modified starches as potential excipients [binders] for the tableting of paracetamol.

All the methods used were developed in-house except where indicated otherwise.

Extraction of starch

The maize [Zea mays], wheat [Triticum aestivum] and rice [Oryza sativa] grains were procured from Nsukka Central Market, Enugu state, Nigeria. The grains were identified by Mrs. Onwunyili Amaka at the Department of Pharmacognosy and Traditional Medicine, Faculty of Pharmaceutical Sciences, Nnamdi Azikiwe University Awka, Anambra state, Nigeria, with the voucher number (PCG/474/A/052). The grains were then washed and soaked separately in distilled water for 24 h. After fermentation, they were grated (using an aluminum grater) and the resulting mash was macerated for 24 h in distilled water and sieved with a fine nylon sieve, and the slurry was allowed to stand undisturbed for 10 h. The supernatant was then decanted and further washed with purified water to remove any soluble impurities that may be present. The final slurry was kept to stand for another 10 h, and then it was oven-dried at 60 °C for 2 h and milled. This procedure was carried out on the three sources of starch.

Confirmatory test

This was done according to the British Pharmacopoeia (Pharmacopoeia 2002) to confirm the starches. A 1-mg quantity of each starch was boiled with 50 ml of distilled water and then cooled. Mucilage was formed, to which 1 ml of iodine solution was added and observed.

Defatting of starches

This was carried out with aqueous methanol [85%] using a Soxhlet extractor. A 200-g quantity of Z. mays starch was extracted for 24 h in a Soxhlet extractor using 85% v/v aqueous methanol as solvent. The defatted starch was oven-dried and then pulverized to reduce its particle size with the aid of a pestle and mortar, and the resulting powder was stored in well-dried plastic containers. The same procedure was repeated for T. aestivum and O. sativa starches, respectively.

Cross-linking of maize starch

A 20-g quantity of dried maize starch was treated with 30 ml of ethanol to make it reactive. The slurry was then filtered to recover the starch residue. A slurry of the reactive starch was made in an alkaline medium using 0.5% NaOH and different concentrations of sucrose and formaldehyde as the cross-linking agents [2.5, 5, 10, 20 and 40%]. The mixture was kept at a temperature of 40 °C for 30 min with continuous stirring. Subsequently, the pH of the mixture was adjusted to about 5.0 with 0.1 N HCl, after which it was washed and dried to recover the cross-linked maize starch. After drying, the particle size of the maize starch was reduced by passing it through a 0.17-mm mesh. This procedure was repeated for both T. aestivum and O. sativa starches.
Starch powder characterization

Moisture content determination

A one-gram quantity of the starch was heated in an oven at 100 °C for 3 h and the weight of the starch compared with the original weight prior to heating. This process was continued until a constant weight was obtained. This final weight was noted and used to calculate the percentage moisture content. The following equation (Onochie et al. 2020) was used to calculate the moisture content of the sample: $${\text{Percentage}}\,{\text{of }}\,{\text{moisture}}\,{\text{in}}\,{\text{a}}\,{\text{sample}} = \frac{{100\left( {{\text{Wet }}\,{\text{sample}}\,{\text{weight}} - {\text{Dried}}\,{\text{sample}}\,{\text{weight}}} \right)}}{{{\text{Dried}}\,{\text{sample}}\,{\text{weight}}}}.$$

Bulk density determination

A 20-g quantity of starch powder was poured through a short glass funnel into a 100-ml graduated cylinder. The volume occupied was then measured and the bulk density determined. The bulk density was taken as the average of three determinations. This was repeated with the two remaining starches.

Tapped density determination

A 20-g quantity of starch powder was poured into a graduated 100-ml measuring cylinder, which was then dropped 20 times from a height of 2.5 cm onto a wooden bench. The final volume after "tapping" was recorded and used to calculate the tapped density. This was repeated for the other two starches.

Determination of the effect of electrolyte [NaCl] on swelling behavior

The method of Ofner and Schott (1986) was used with some modifications: different concentrations of NaCl [2.0 N, 1.0 N, 0.5 N and 0.1 N] were carefully added to the starch samples in a 10-ml measuring cylinder and the NaCl solution was allowed to be absorbed by the starch, after which the unabsorbed solution was decanted. The product was left to stand at room temperature for 48 h. This was done for each of the cross-linked starches, which were prepared with varying concentrations of the cross-linking agents [sucrose and formaldehyde]. The changes in the volumes of the starches were recorded and the swelling extents calculated after 48 h.

pH determination

This was done on each formulated batch by inserting a pH meter electrode into the slurry of the product and allowing the pH reading to stabilize before recording the result.

Viscosity determination

A 1% gel was prepared by dispersing 1 g of starch in 20 ml of distilled water. The starch dispersion was heated using a water bath at a temperature of about 80–100 °C for 3 min while being continuously stirred to gelatinize. The gelatinized starch was made up to 100 ml with distilled water and further stirred to obtain a completely homogeneous mixture. A part of the starch solution was then poured into a U-tube viscometer and its viscosity determined. This test was done in triplicate, and the results were recorded. The viscosity determination was carried out at room temperature.

Differential scanning calorimetry (DSC)

The method used by Adejumo et al. (2021) was followed with some modifications. Investigations of the thermal transitions of the starch samples were done using a heat flux calorimeter (DSC-204 F1 Phoenix®, NETZSCH, 6.240.10 apparatus, Germany), which was calibrated using a high-purity indium standard. A 1-mg starch sample was weighed into a high-temperature nimonic steel pan and water was added to yield a ratio of approximately 1:3. The pan was sealed, equilibrated (at 25 °C for 3 h) and heated at a rate of 3 °C/min, from 25 to 220 °C.
The transition temperatures recorded were the onset gelatinization temperature (To), the peak temperature (Tp) and the conclusion temperature (Tc). The enthalpy of gelatinization was related to the dry mass of the sample. The data were analyzed using GraphPad Prism software version 5.0. The inferential statistics used were analysis of variance (ANOVA), Bartlett's test for equal variances and Bonferroni's multiple comparison test. P-values < 0.05 (at 95% confidence interval) were taken to be significant.

Yield of extractions

The percentage weight of starch extracted from Zea mays, Triticum aestivum and Oryza sativa grains is 48.3% w/w, 27.5% w/w and 17.3% w/w, respectively. The result of the starch confirmatory test showed that the products from Zea mays, Triticum aestivum and Oryza sativa were positive to iodine solution.

Properties of cross-linked starches

Percentage moisture contents

From the results in Table 1, maize and rice starches cross-linked with 2.5% sucrose gave the lowest percentage moisture content. Wheat starch, combined with 10%, 20%, 40% sucrose and 2.5% formaldehyde, also had 20% moisture content, while, in all the starches, the batches cross-linked with 40% formaldehyde had the highest moisture content. Table 1 The moisture content determination

Bulk and tapped density

The bulk and tapped densities of maize, wheat and rice starches, as presented in Table 2, showed a decrease with an increase in the concentration of the cross-linking agent for sucrose, while for formaldehyde the reverse was the case in some instances. The bulk and tapped densities of maize decreased as the concentration of the cross-linking agent increased for sucrose, while for formaldehyde the reverse was the case. The sucrose cross-linked maize starch had the greatest bulk and tapped densities of 25 and 35 g/ml, respectively, while the formaldehyde cross-linked maize starch had the highest bulk and tapped densities of 27 and 40 g/ml. The 20% sucrose cross-linked maize starch had the highest Carr's index of 42.4 and Hausner ratio of 1.74, while the 40% formaldehyde cross-linked starch had the highest Carr's index of 32.5 and Hausner ratio of 1.48, which indicates poor flowability of the starch powder. Table 2 The bulk and tapped densities of the three cross-linked starches

The bulk and tapped densities of wheat starch reveal that the 40% sucrose cross-linked wheat starch powder had the highest bulk and tapped densities of 37 g/ml and 55 g/ml, respectively. Similarly, the 2.5% formaldehyde cross-linked wheat starch powder had the lowest bulk and tapped densities of 9 g/ml and 13 g/ml. The bulk and tapped density of rice starch powder was significantly influenced (p value < 0.05) by the moisture content, as the bulk density reduced greatly with a rise in moisture content. The results revealed that the 40% sucrose cross-linked rice starch had the highest bulk density of 39.5 g/ml and the 10% formaldehyde cross-linked rice starch powder had a better Carr's index of 11.8 and Hausner ratio of 1.13. Also, the 5%, 10%, 20% and 40% formaldehyde cross-linked rice starch powders, with Carr's indices of 12.2, 11.8, 12.8 and 12.5 and Hausner ratios of 1.14, 1.13, 1.15 and 1.14, respectively, had good flowability. From the results, further physicochemical profiles of the different cross-linked starches are shown in Table 3. The percentage swelling of the native starches of maize, wheat and rice is higher than that of the cross-linked derivatives.
The highest swelling among the cross-linked starches was observed in both the sucrose and the formaldehyde cross-linked rice starch, while the least swelling was observed in the 5% and 40% sucrose cross-linked maize and wheat starches.

Table 3 Further physicochemical profiles of the cross-linked maize, wheat and rice starches

Hydrogen ion index (pH)
The different concentrations of sucrose and formaldehyde cross-linked maize, wheat and rice starches had pH values between 4.50 and 5.52. The 5% formaldehyde cross-linked wheat starch had the highest pH of 5.53, while the 20% formaldehyde cross-linked rice starch had the lowest pH of 4.5. The pH of almost all concentrations of sucrose and formaldehyde cross-linked maize, wheat and rice starches was above 5, with the exception of the 10%, 20% and 40% formaldehyde cross-linked maize and rice starches, which had pH below 5. Overall, the sucrose and formaldehyde cross-linked maize, wheat and rice starches had acidic pH values between 4.50 and 5.55.

Thermal properties of cross-linked starches
As observed in Tables 4 and 5, the onset and end set of the glass transition temperatures (Tg) varied for all the starches modified with either sucrose or formaldehyde. The energy changes (∆H) for the glass transition also varied with the concentration of cross-linking agent used. In the sucrose-modified maize and rice starches (Table 5), the onset of transition occurred at approximately similar temperatures except at 10% sucrose concentration, where it was significantly higher. The results showed that the ∆Hs for the Tg were generally higher for the formaldehyde- than for the sucrose-modified maize starches, except at 20% chemical agent concentration where the latter had a significantly higher value (p value < 0.05). The melting peaks generally occurred at higher temperatures for the formaldehyde-modified maize starch than for the sucrose-modified one, except at 2.5% chemical agent concentration where the reverse was the case.

Table 4 Thermal properties of starches modified with formaldehyde

Table 5 Thermal properties of starches modified with sucrose

A two-way ANOVA of the data shows that the concentration of the cross-linking agent accounted for 99.83% of the total variation seen in the thermal properties tested, with a p value < 0.0001; the effect is considered extremely significant. Table 6 shows the thermal (DSC) behavior of the defatted and undefatted wheat, maize and rice starches. For the defatted starches, the onset temperatures of the Tg were in the order wheat > rice > maize, while those of the end set were in the order wheat > maize > rice. However, there was no significant difference in the thermal properties of the defatted and undefatted starches (p value > 0.05). The ∆Hs obtained for the defatted starches were in the order maize > wheat > rice, with significant differences between the values obtained. The melting peaks were recorded at 292.9 °C, 285.2 °C and 287.5 °C for the defatted wheat, maize and rice starches, respectively, showing only slight differences. Considering the maize starch, both the onset and end set temperatures of the glass transition (Tg), as well as the energy change (∆H) recorded, were higher for the undefatted than for the defatted sample.

Table 6 Thermal properties of defatted and undefatted starches

Native starch has limited applications because it cannot exhibit some desired properties, such as the ability to withstand certain processing and packing conditions and adequate compressibility.
However, these limitations can be corrected by modification of the native starch, and the most commonly used method is chemical modification (Okeke et al. 2021). Moreover, the use of specific moisture and temperature conditions can alter the physicochemical properties of starch, since many physical modifications involve the use of water and heat (Senanayake et al. 2013). Cross-linked maize and rice starches had the largest pore sizes, which trap a large amount of water, resulting in the highest moisture content. At 40% formaldehyde concentration, the moisture content of maize starch was the highest compared with wheat and rice. This demonstrates that formaldehyde treatment has a directly proportional effect on maize starch, i.e., the percentage moisture contents fluctuated but typically increased as the quantity of cross-linker rose (Oladunmoye et al. 2014; Belibi et al. 2014). In contrast, varying the concentration of the formaldehyde treatment on rice showed a constant effect on moisture content. In all the starches, the batches cross-linked with 40% formaldehyde had the highest moisture content. On the other hand, 2.5% sucrose gave a decrease in the moisture content of maize and rice, while 10, 20 and 40% sucrose gave a reduced effect in wheat starch. This shows that increasing the sucrose concentration has an inversely proportional effect on the moisture content of wheat starch. Enzyme activation and microbial multiplication may occur when the moisture level is high. Low moisture content generally indicates a high level of stability during storage, protecting starches from mould formation and providing a high dry-weight yield (Jubril et al. 2012). A moisture content of 12% or more will provide enough moisture for drug breakdown and microbial activity (Odeku et al. 2003).

The flow properties of powders are crucial in assessing the adequacy of a material as a direct compression excipient. The Hausner ratio and Carr's percent compressibility are regarded as indirect ways of measuring the flow property of a powder. The Hausner ratio reflects inter-particle friction, whereas Carr's index measures a material's capacity to reduce in volume. A Hausner ratio higher than 2.5 signifies poor flow, and a Carr's index of less than 16% signifies good flowability, while values of more than 35% signify cohesiveness. As the values of these indices increase, the flow of the powder is reduced, and this increases the likelihood of producing tablets with greater weight variation (Okunlola and Odeku 2011). The starches obtained from all three sources had Hausner ratios of less than 2 and Carr's indices greater than 16% (except for the 10%, 20% and 40% formaldehyde-treated starches). Wheat starch cross-linked with 5% sucrose had a Carr's index of 17.4, which indicates low flowability and a chance of producing tablets with weight variation (Jubril et al. 2012). Bulk, tapped and true densities are the usually measured density values, which are also used to analyze the major properties of powders. Bulk density gives details on the volume occupied by the inter-granular spaces and the inner and external pores of the solids. It indicates the overall degree of packing in a specific volume, or the coarseness of the starch sample. Tapped density is the density after tapping or vibration.
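Both indices follow directly from the bulk and tapped densities reported in Table 2. A minimal sketch using the standard definitions is given below; the density values in the example are hypothetical.

```python
# Standard definitions of Carr's index and the Hausner ratio computed from
# bulk and tapped density; the example densities are hypothetical.

def carrs_index(bulk: float, tapped: float) -> float:
    """Carr's (compressibility) index in percent."""
    return 100.0 * (tapped - bulk) / tapped

def hausner_ratio(bulk: float, tapped: float) -> float:
    """Hausner ratio (dimensionless)."""
    return tapped / bulk

bulk_d, tapped_d = 0.50, 0.58        # g/ml, hypothetical
ci = carrs_index(bulk_d, tapped_d)   # ~13.8 %, i.e. below the 16 % good-flow limit
hr = hausner_ratio(bulk_d, tapped_d) # ~1.16
print(f"Carr's index = {ci:.1f} %, Hausner ratio = {hr:.2f}")
```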
From the observations in Table 2, the bulk densities of maize decreased with a rise in the concentration of sucrose but increased with an increase in the concentration of formaldehyde. Wheat and rice showed a slight irregularity at 10% and 40% sucrose and decreased with an increase in the concentration of formaldehyde. At the various sucrose and formaldehyde concentrations, the tapped densities exhibited an irregular, wavy pattern. This demonstrates that increasing the quantity of cross-linking agent increases the volume and tapped density of the starch powder (Awolu and Olofinlae 2016).

Swelling is widely accepted as an assessment of tablet disintegration ability. From the results obtained for the swelling profiles of the different cross-linked starches in Table 3, the percentage swelling of the native maize, wheat and rice starches is larger than that of the cross-linked derivatives, which might be attributable to granule alteration that reduced the hydration capacity of the cross-linked starches. An increase in the concentration of the cross-linking agent resulted in an increase in the number of cross-links, conferring increased stability on the starch granule. As a result, the decrease in water absorption was more apparent at greater cross-linker concentrations. This suggests that cross-linking influences the ease with which water may reach the starch, and consequently the swelling properties of the cross-linked polymer are reduced (Yu et al. 2016). Porosity determines the swelling ability of starch: the higher the porosity, the more inter-particulate spaces where water can be absorbed (Carmona-Garcia et al. 2009). The increase in the ionic strength of the cross-linked starch decreased the osmotic pressure inside the charged paste and caused a reduction in its swelling. Cross-linking also caused a strong elastic contraction of the polymer network, which counteracted the swelling process. Hydration leads to swelling, and it depends on the type and number of hydrophilic groups in the polymer structure. The highest swelling among the cross-linked starches was observed in both the sucrose and the formaldehyde cross-linked rice starch, while the least swelling was observed in the 5% and 40% sucrose cross-linked maize and wheat starches. Swelling power is a parameter analyzed in the theory of disintegration, which must be preceded by water penetration. Therefore, an increase in the percentage swelling of the sucrose and formaldehyde cross-linked rice starches leads to activation of the reactive moieties, which enhances the disintegrating properties of formulated tablets (Tesfay et al. 2020).

The pH of any starch is an important factor in its applications in the pharmaceutical and other industries, because it indicates the acidity or alkalinity of the liquid media (Ashogbon and Akintayo 2012). All cross-linked starches showed an overall slight reduction in pH compared with the native starch. The pH of almost all concentrations of sucrose and formaldehyde cross-linked maize, wheat and rice starches was above 5, with the exception of the 10%, 20% and 40% formaldehyde cross-linked maize and rice starches, which had pH below 5. A slightly acidic pH will not pose a problem when the starch sample is employed as a food additive (Awolu et al. 2020). The swelling and viscosity of cross-linked starches are very important and useful features for assessing the level of cross-linking. Table 3 also illustrates the dependence of the viscosity of the starches on the concentration and type of cross-linker.
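The viscosity values in Table 3 come from the U-tube viscometer measurement described in the methods. A rough sketch of how an efflux-time reading converts into a kinematic viscosity is given below; the viscometer constant and efflux times are assumptions, since the paper does not report them.

```python
# Hedged sketch of a capillary (U-tube) viscometer calculation.
# Kinematic viscosity = instrument constant (cSt/s) x efflux time (s).
# The constant and the triplicate times below are hypothetical.

def kinematic_viscosity_cst(viscometer_constant: float, efflux_time_s: float) -> float:
    """Kinematic viscosity in centistokes for one efflux-time reading."""
    return viscometer_constant * efflux_time_s

times_s = [215.0, 218.0, 216.5]   # triplicate efflux times, hypothetical
constant = 0.01                   # cSt/s, hypothetical calibration constant
mean_visc = sum(kinematic_viscosity_cst(constant, t) for t in times_s) / len(times_s)
print(f"Mean kinematic viscosity of the 1% gel = {mean_visc:.2f} cSt")
```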
It is observed that the viscosity of the cross-linked wheat and rice starches decreased as the concentration of either cross-linker increased, except for maize, which showed a slight increase at 40% sucrose and formaldehyde. The viscosities of the cross-linked starches were also generally different from those of the non-cross-linked (native) ones; in general, the viscosities of the native starches were higher than those of the cross-linked starches. This agrees with Shah et al. (2016), who reported that the peak viscosity of cross-linked starches is inversely proportional to the concentration of cross-linking agent: starch with a greater cross-linking level will show a lower peak viscosity than starch with a lesser cross-linking level.

The thermal properties of the defatted and undefatted wheat, maize and rice starches were also analyzed, as shown in Table 6. For maize starch, both the onset and end set temperatures of the glass transition (Tg), as well as the energy change (∆H) recorded, were higher for the undefatted than for the defatted sample. This suggests that the defatting process probably lowered the intermolecular forces within the starch sample, so that less energy was required for the Tg process. Except for the onset temperature, a similar trend was observed for the rice starch. There were no significant differences between the melting peaks recorded for the undefatted and defatted starches. Furthermore, considering the thermal parameters of the defatted and formaldehyde-modified wheat starches, the Tg took place at lower temperatures for the samples treated with 2.5–20% formaldehyde. However, as the formaldehyde concentration increased toward 40%, the Tg occurred at much higher temperatures. A similar trend was observed for the ∆Hs involved in the transitions. The melting peaks generally occurred at slightly elevated temperatures for the formaldehyde-modified wheat starch samples. These observations indicate that the chemical modification resulted in a wheat starch with a more ordered (crystalline) molecular conformation than the natural moiety (Gonenc and Us 2019). In comparison with the sucrose-modified wheat starch, the end set temperature of the Tg as well as the ∆H was significantly higher than for the formaldehyde-modified sample. However, the melting endotherm was higher for the formaldehyde-modified wheat starch than for the sucrose-modified sample. For the formaldehyde-modified maize starch, the onset temperatures of the Tg were clearly greater than those of the defatted maize starch over the range of concentrations tested. The end set temperatures were, however, lower, except at 20% formaldehyde concentration. Similarly, the ∆H for the Tg of the defatted maize starch was higher than those of the formaldehyde-modified samples, except at 20% formaldehyde concentration. The melting peaks of the formaldehyde-modified maize starch were greater than those of the untreated defatted sample (Šuput et al. 2016). In comparison with the sucrose-modified maize starch, the onset temperatures were also higher than for the untreated and defatted samples. The onset of transition occurred at approximately similar temperatures except at 10% sucrose concentration, where it was significantly higher. The results showed that the ∆Hs for the Tg were generally higher for the formaldehyde- than for the sucrose-modified maize starches, except at 20% chemical agent concentration where the latter had a significantly higher value.
The melting peaks generally occurred at higher temperatures for the formaldehyde-modified maize starch than for the sucrose-modified one, except at 2.5% chemical agent concentration where the reverse was the case. Both the onset and end set temperatures of the Tg for the formaldehyde-modified rice starch were greater than for the untreated and defatted sample. The ∆Hs for the Tg of the cross-linked rice starch samples were also higher than for the untreated sample. On the other hand, the melting peaks of the formaldehyde-treated rice starch samples occurred at considerably lower temperatures than for the untreated sample. Cross-linking leads to an increase in the decomposition temperature, a result of the formation of a stronger network of intra- and inter-molecular bonds through the action of the cross-linking agents (Dhull et al. 2021). For the sucrose-modified rice starch, the onset and end set Tg occurred at higher temperatures at 2.5% sucrose concentration than for the untreated but defatted sample. However, the onset and end set Tg occurred at lower temperatures when the sucrose concentration was increased to 5%. The ∆Hs for the Tg were also higher for the sucrose-modified rice starch than for the untreated sample. However, the melting peak of the untreated but defatted rice starch was significantly higher than for the sucrose-modified samples. In comparison with the formaldehyde-modified rice starch, the sucrose-modified starch had higher onset and end set temperatures for the Tg. Varied results were obtained for the ∆Hs and the melting peak temperatures.

The overall results obtained for the various modified starches showed that the chemical (cross-linking) agents used affected the original molecular conformations of the native samples, although the amorphous and crystalline structures were still present, as indicated by the glass and melting transitions; this has also been observed in other studies (Hassan et al. 2020; Franco-Bacca et al. 2021). The changes were due to cross-linking of the starch moieties by the functional groups in the chemical agents used. The extent of the changes might partly be attributed to the degree of amorphous and crystalline character of the original starch molecules. The resultant effect is that new starch motifs were produced, with improved or new functionalities as pharmaceutical excipients.

This study reveals that various concentrations of sucrose and formaldehyde had some influence on the properties of the native starches. The cross-linking agents raised the surface activity of the starch molecules by generating a change in the conformation of the molecules at the interface, resulting in changes in viscosity and other physicochemical properties. In general, the cross-linking agents denatured the starches, leading to the formation of new starch motifs with improved or new functionalities, which may increase their suitability as pharmaceutical excipients in tableting.

All data and material are available in the manuscript.

Abbreviations
DSC: Differential scanning calorimetry
MCC:

References
Adejumo SA, Oli AN, Okoye EI, Nwakile CD, Ojiako CM, Okezie UM, Okeke IJ, Ofomata CM, Attama AA, Okoyeh JN, Esimone CO (2021) Biosurfactant production using mutant strains of Pseudomonas aeruginosa and Bacillus subtilis from agro-industrial wastes. Adv Pharm Bull 11(3):543–556. https://doi.org/10.34172/apb.2021.063
Adetunji OA (2019) Chemically modified starches as excipients in pharmaceutical dosage forms. In: Chemical properties of starch.
IntechOpen, p 3
Adjei FK, Osei YA, Kuntworbe N, Ofori-Kwakye K (2017) Evaluation of the disintegrant properties of native starches of five new cassava varieties in paracetamol tablet formulations. J Pharm (Cairo) 2017:2326912. https://doi.org/10.1155/2017/2326912
Agyepong JK, Barimah J (2018) Physicochemical properties of starches extracted from local cassava varieties with the aid of crude pectolytic enzymes from Saccharomyces cerevisiae (ATCC 52712). Afr J Food Sci 12(7):151–164
Alabi CO, Singh I, Odeku OA (2018) Evaluation of natural and pregelatinized forms of three tropical starches as excipients in tramadol tablet formulation. J Pharm Investig 48(3):333–340
Allamneni NA, Suresh JN (2014) Co-processed excipients as a new generation of excipients with multifunctional activities: an overview. Indian J Pharm Sci Res 4(1):22–25
Ashogbon AO, Akintayo ET (2012) Morphological, functional and pasting properties of starches separated from rice cultivars grown in Nigeria. Int Food Res J 19(2):665–671
Awolu OO, Olofinlae SJ (2016) Physico-chemical, functional and pasting properties of native and chemically modified water yam (Dioscorea alata) starch and production of water yam starch-based yoghurt. Starch-Stärke 68(7–8):719–726
Awolu OO, Odoro JW, Adeloye JB, Lawal OM (2020) Physicochemical evaluation and Fourier transform infrared spectroscopy characterization of quality protein maize starch subjected to different modifications. J Food Sci 85(10):3052–3060. https://doi.org/10.1111/1750-3841.15391
Ayoub AS, Rizvi SS (2009) An overview on the technology of cross-linking of starch for nonfood applications. J Plast Film Sheeting 25(1):25–45
Bajer D, Burkowska-But A (2021) Innovative and environmentally safe composites based on starch modified with dialdehyde starch, caffeine, or ascorbic acid for applications in the food packaging industry. Food Chem 374:131639. https://doi.org/10.1016/j.foodchem.2021.131639
Belibi PC, Daou TJ, Ndjaka JM, Nsom B, Michelin L, Durand B (2014) A comparative study of some properties of cassava and tree cassava starch films. Phys Procedia 1(55):220–226
BeMiller JN, Whistler RL (eds) (2009) Starch: chemistry and technology. Academic Press
Carmona-Garcia R, Sanchez-Rivera MM, Méndez-Montealvo G, Garza-Montoya B, Bello-Pérez LA (2009) Effect of the cross-linked reagent type on some morphological, physicochemical and functional characteristics of banana starch (Musa paradisiaca). Carbohydr Polym 76(1):117–122
Dhull SB, Bangar SP, Deswal R, Dhandhi P, Kumar M, Trif M, Rusu A (2021) Development and characterization of active native and cross-linked pearl millet starch-based film loaded with fenugreek oil. Foods 10(12):3097. https://doi.org/10.3390/foods10123097. PMID: 34945648; PMCID: PMC8700877
Elgaied-Lamouchi D, Descamps N, Lefèvre P et al (2020) Robustness of controlled release tablets based on a cross-linked pregelatinized potato starch matrix. AAPS PharmSciTech 21:148. https://doi.org/10.1208/s12249-020-01674-4
Elgaied-Lamouchi D, Descamps N, Lefevre P, Rambur I, Pierquin JY, Siepmann F, Siepmann J, Muschert S (2021) Starch-based controlled release matrix tablets: impact of the type of starch. J Drug Deliv Sci Technol 61:102152. https://doi.org/10.1016/j.jddst.2020.102152
Franco-Bacca AP, Cervantes-Alvarez F, Macías JD, Castro-Betancur JA, Pérez-Blanco RJ, Giraldo Osorio OH, Arias Duque NP, Rodríguez-Gattorno G, Alvarado-Gil JJ (2021) Heat transfer in cassava starch biopolymers: effect of the addition of borax.
Polymers (Basel) 13(23):4106. https://doi.org/10.3390/polym13234106
Garcia MA, Garcia CF, Faraco AA (2020) Pharmaceutical and biomedical applications of native and modified starch: a review. Starch-Stärke 72(7–8):1900270
Gonenc I, Us F (2019) Effect of glutaraldehyde crosslinking on degree of substitution, thermal, structural, and physicochemical properties of corn starch. Starch-Stärke 71(3–4):1800046
Hassan MM, Tucker N, Le Guen MJ (2020) Thermal, mechanical and viscoelastic properties of citric acid-crosslinked starch/cellulose composite foams. Carbohydr Polym 230:115675
Haub MD, Hubach KL, Al-Tamimi EK, Ornelas S, Seib PA (2010) Different types of resistant starch elicit different glucose responses in humans. J Nutr Metab 2010:230501. https://doi.org/10.1155/2010/230501
Hoover R, Hughes T, Chung HJ, Liu Q (2010) Composition, molecular structure, properties, and modification of pulse starches: a review. Food Res Int 43(2):399–413
Ibezim EC, Andrade CT (2006) Properties of maize (Amidex®) starch crosslinked by pregelatinisation with sodium trimetaphosphate: II. Flow behaviours and goniometry. Bio-Research 4(2):135–142
Ibrahim NA, Nada AA, Eid BM (2018) Polysaccharide-based polymer gels and their potential applications. In: Thakur V, Thakur M (eds) Polymer gels. Gels horizons: from science to smart materials. Springer, Singapore. https://doi.org/10.1007/978-981-10-6083-0_4
Janković B (2010) Thermal stability investigation and the kinetic study of Folnak degradation process under non-isothermal conditions. AAPS PharmSciTech 11(1):103–112. https://doi.org/10.1208/s12249-009-9363-6
Jubril I, Muazu J, Mohammed GT (2012) Effects of phosphate modified and pregelatinized sweet potato starches on disintegrant property of paracetamol tablet formulations. J Appl Pharm Sci 2(2):32
Khalid GM, Musa H, Olowosulu AK (2016) Evaluation of filler/binder properties of modified starches derived from Plectranthus esculentus by direct compression in metronidazole tablet formulations. Pharm Anal Acta 7(1):74
Kolawole SA, Igwemmar NC, Bello HA (2013) Comparison of the physicochemical properties of starch from ginger (Zingiber officinale) and maize (Zea mays). Int J Sci Res 2(11):71–76
Lawal MV (2019) Modified starches as direct compression excipients—effect of physical and chemical modifications on tablet properties: a review. Starch-Stärke 71(1–2):1800040
Le Thanh-Blicharz J, Lewandowicz J, Małyszek Z, Kowalczewski PŁ, Walkowiak K, Masewicz Ł, Baranowska HM (2021) Water behavior of aerogels obtained from chemically modified potato starches during hydration. Foods 10(11):2724. https://doi.org/10.3390/foods10112724
Mohammed KG (2017) Modified starch and its potentials as excipient in pharmaceutical formulations. Novel Approaches Drug Des Dev 1(1):1–4
Muazu J, Girbo A, Usman A, Mohammed GT (2012) Preliminary studies on Hausa potato starch I: the disintegrant properties. J Pharm Sci Technol 4(3):883–891
Neelam K, Vijay S, Lalit S (2012) Various techniques for the modification of starch and the applications of its derivatives. Int Res J Pharm 3(5):25–31
Odeku OA, Awe OO, Popoola B, Odeniyi MA, Itiola OA (2005) Compression and mechanical properties of tablet formulations containing corn, sweet potato, and cocoyam starches as binders. Pharm Technol 29(4):82–90
Ofner CM III, Schott H (1986) Swelling studies of gelatin I: gelatin without additives.
J Pharm Sci 75(8):790–796
Okeke IJ, Oli AN, Yahaya ZS, Gugu TH, Ibezim EC (2021) Disintegration, hardness and dissolution profiles of paracetamol tablets formulated using sucrose and formaldehyde cross-linked starches. J Pharm Res Int 33(60B):478–485
Okunlola A, Odeku OA (2011) Evaluation of starches obtained from four Dioscorea species as binding agent in chloroquine phosphate tablet formulations. Saudi Pharm J 19(2):95–105
Oladunmoye OO, Aworh OC, Maziya-Dixon B, Erukainure OL, Elemo GN (2014) Chemical and functional properties of cassava starch, durum wheat semolina flour, and their blends. Food Sci Nutr 2(2):132–138
Olayemi OJ, Apeji YE, Isimi CY (2020) Formulation and evaluation of Cyperus esculentus (tiger nut) starch-alginate microbeads in the oral delivery of ibuprofen. J Pharm Innov 22:1
Onochie AU, Oli AH, Oli AN, Ezeigwe OC, Nwaka AC, Okani CO, Okam PC, Ihekwereme CP, Okoyeh JN (2020) The pharmacobiochemical effects of ethanol extract of Justicia secunda Vahl leaves in Rattus norvegicus. J Exp Pharmacol 12:423–437. https://doi.org/10.2147/JEP.S267443
Pérez S, Bertoft E (2010) The molecular structures of starch components and their contribution to the architecture of starch granules: a comprehensive review. Starch-Stärke 62(8):389–420
Persson AS, Alderborn G (2018) A hybrid approach to predict the relationship between tablet tensile strength and compaction pressure using analytical powder compression. Eur J Pharm Biopharm 125:28–37. https://doi.org/10.1016/j.ejpb.2017.12.011
British Pharmacopoeia (2002) Her Majesty's Stationery Office, London, Vol II
Piriyaprasarth S, Patomchaiviwat V, Sriamornsak P, Seangpongchawal N, Katewongsa P, Akeuru P, Srijarreon P, Suttiphratya P (2010) Evaluation of yam (Dioscorea sp.) starch and arrowroot (Maranta arundinacea) starch as suspending agent in suspension. In: Advanced materials research, vol 93. Trans Tech Publications Ltd, pp 362–365
Sapsford KE, Algar WR, Berti L, Gemmill KB, Casey BJ, Oh E, Stewart MH, Medintz IL (2013) Functionalizing nanoparticles with biological molecules: developing chemistries that facilitate nanotechnology. Chem Rev 113(3):1904–2074
Senanayake S, Gunaratne A, Ranaweera KK, Bamunuarachchi A (2013) Effect of heat moisture treatment conditions on swelling power and water soluble index of different cultivars of sweet potato (Ipomoea batatas (L.) Lam) starch. Int Sch Res Not 2013:1–4
Shah N, Mewada RK, Mehta T (2016) Crosslinking of starch and its effect on viscosity behaviour. Rev Chem Eng 32(2):265–270
Shen Y, Zhang N, Xu Y, Huang J, Wu D, Shu X (2019) Physicochemical properties of hydroxypropylated and cross-linked rice starches differential in amylose content. Int J Biol Macromol 1(128):775–781
Siroha AK, Sandhu KS (2018) Physicochemical, rheological, morphological, and in vitro digestibility properties of cross-linked starch from pearl millet cultivars. Int J Food Prop 21(1):1371–1385
Šuput D, Lazić V, Pezo L, Markov S, Vaštag Ž, Popović L, Radulović A, Ostojić S, Zlatanović S, Popović S (2016) Characterization of starch edible films with different essential oils addition. Pol J Food Nutr Sci 66:277–285
Tesfay D, Abrha S, Yilma Z, Woldu G, Molla F (2020) Preparation, optimization, and evaluation of epichlorohydrin cross-linked enset (Ensete ventricosum (Welw.) Cheeseman) starch as drug release sustaining excipient in microsphere formulation. Biomed Res Int 2020:2147971. https://doi.org/10.1155/2020/2147971
Umida-Khodjaeva TB (2013) Food additives as important part of functional food.
J Microbiol Biotechnol 56:2125–2135
Yu S, Liu J, Yang Y, Ren J, Zheng X, Kopparapu NK (2016) Effects of amylose content on the physicochemical properties of Chinese chestnut starch. Starch-Stärke 68(1–2):112–118
Zhang Y, Kou R, Lv S, Zhu L, Tan H, Gu J, Cao J (2015) Effect of mesh number of wood powder and ratio of raw materials on properties of composite material of starch/wood powder. BioResources 10(3):5356–5368

Acknowledgements
The gracious provision of space and equipment by the managements of the National Institute for Pharmaceutical Research and Development (NIPRID), Abuja, and the National Agency for Food and Drug Administration and Control (NAFDAC) Laboratory, Agulu, made this work possible.

Author information
Department of Pharmaceutics and Pharmaceutical Technology, Faculty of Pharmaceutical Sciences, Nnamdi Azikiwe University, Awka, Nigeria: Ifeanyi Justin Okeke
Department of Pharmaceutics, Faculty of Pharmaceutical Sciences, University of Nigeria, Nsukka, Nigeria: Ifeanyi Justin Okeke & Emmanuel Chinedum Ibezim
Department of Pharmaceutical Microbiology, Faculty of Pharmaceutical Sciences, Nnamdi Azikiwe University, Awka, Nigeria: Angus Nnamdi Oli & Chioma Miracle Ojiako
Department of Pharmaceutical Microbiology and Biotechnology, Faculty of Pharmaceutical Sciences, Federal University Oye-Ekiti, Ekiti State, Nigeria: Chioma Miracle Ojiako
Department of Biology and Clinical Laboratory Science, Division of Arts and Sciences, Neumann University, One Neumann Drive, Aston, PA, 19014-1298, USA: Jude N. Okoyeh

Author contributions
ECI conceived and designed the work. IJO participated in the design of the work and carried out laboratory work and data collection. ANO and CMO drafted the manuscript. JNO read the work for intellectual content. All authors read and approved the final manuscript. Correspondence to Chioma Miracle Ojiako.

Ethics declarations
Local, national or international guidelines and legislation: not applicable for starches from maize, wheat and rice.

Citation
Okeke, I.J., Oli, A.N., Ojiako, C.M. et al. Sucrose- and formaldehyde-modified native starches as possible pharmaceutical excipients in tableting. Bull Natl Res Cent 46, 63 (2022). https://doi.org/10.1186/s42269-022-00748-6
Cross-link Pharmaceutical Industries
The Measurement of Electromagnetic Wave in Power Cable Tunnel of Underground Utility Tunnel
Kang, Dae Kon; Park, Jai Hak, p 1. https://doi.org/10.14346/JKOSOS.2019.34.1.1
Electromagnetic measurements of the power cable tunnel were conducted from August 10 to 20, 2018, in the ${\bigcirc}{\bigcirc}$ city underground utility tunnel. During this period, the average temperature in the power cable tunnel was 31.89 °C and the humidity was 67.56%. The highest measured electric field was 25.3 V/m and the highest magnetic flux density was 42.6 μT; the averages in the power cable tunnel were 18.56 V/m and 29.32 μT, respectively. Compared with the electric equipment technical standard, the electric field in the power cable tunnel was 0.5% of the standard and the magnetic flux density was 35.2% of the standard. The electric field is similar to that of a robotic vacuum cleaner (15.53 V/m), and the magnetic flux density is similar to that of a capsule-type coffee machine (23.07 μT). A comparison of the cable lines in the power cable tunnel showed that the number of cable lines and the magnitude of the electromagnetic fields were not proportional to each other. It was confirmed that the 154 kV lines, rather than the 22.9 kV lines, could have the greater influence on occupational exposure.

Lifetime Prediction on PVC Insulation Material for IV and HIV Insulated Wire
Park, Hyung-Ju, p 8
Weight and elongation changes of IV and HIV insulations were measured simultaneously at several given temperatures (80 °C, 90 °C and 100 °C), and the lifetime was predicted using the Arrhenius model. Based on the initial weight values, a 50% reduction in elongation was seen at a weight change of 6.96% for the IV insulation and 10.29% for the HIV insulation. The activation energy obtained from the slope of the lifetime regression equation was 92.895 kJ/mol (0.9632 eV) for the IV insulation and 95.213 kJ/mol (0.9873 eV) for the HIV insulation. The expected lifetime at operating temperatures of 30 °C to 90 °C is 2.02 to 94.32 years, and a longer lifetime was predicted for HIV insulated wires than for IV insulated wires. As a result, it was found that the thermal characteristics of the HIV insulated wires were about 12.44% better than those of the IV insulated wires under the same conditions of use.

Safety Assessment for PCS of Photovoltaic and Energy Storage System Applying FTA
Kim, Doo-Hyun; Kim, Sung-Chul; Kim, Eui-Sik; Nam, Ki-Gong; Jeong, Cheon-Kee, p 14
This paper presents a safety-assessment-based approach for the safe operation of the power conditioning system (PCS) of photovoltaic and energy storage systems, applying fault tree analysis (FTA). The approach established power outage and the failure likely to cause the largest damage among the potential risks of the PCS as top events. The minimal cut sets (MCS) and the importance of the basic events were then analyzed to implement the risk assessment. To this end, the components of the PCS and their functions were categorized. To calculate the MCS frequencies, the failure rates and failure modes of the basic events were derived from IEEE J Photovolt 2013, IEEE Std. 493-2007 and RAC (EPRD, NPRD) data. In order to analyze the top events of failure and power outage, it was assumed that failures occurred in the DC breaker, AC breaker, SMPS, DC filter, inverter, CT, PT, DSP board, HMI, AC reactor, MC and EMI filter, and the fault tree was drawn.
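To make the minimal cut set (MCS) and importance analysis mentioned in this abstract more concrete, a small sketch of how minimal cut sets yield a top-event probability and a basic-event importance ranking is given below. The component probabilities and cut sets are invented for illustration and are not the study's data.

```python
# Illustrative fault-tree arithmetic: top-event probability from minimal cut sets
# and a Fussell-Vesely importance ranking of basic events.
# All probabilities and cut sets below are hypothetical.
from math import prod

basic_event_prob = {      # hypothetical annual failure probabilities
    "DC breaker": 0.02,
    "AC breaker": 0.02,
    "SMPS": 0.05,
    "Inverter": 0.04,
    "DSP board": 0.03,
}

# Hypothetical minimal cut sets for the top event "power outage"
mcs = [("SMPS",), ("Inverter",), ("DC breaker", "AC breaker")]

mcs_prob = [prod(basic_event_prob[e] for e in cut) for cut in mcs]
top = 1.0 - prod(1.0 - p for p in mcs_prob)   # min-cut upper bound on P(top event)

# Fussell-Vesely importance: share of P(top) attributable to cut sets containing each event
fv = {
    e: sum(p for cut, p in zip(mcs, mcs_prob) if e in cut) / top
    for e in basic_event_prob
}
print(f"P(top) = {top:.4f}")
for event, importance in sorted(fv.items(), key=lambda kv: -kv[1]):
    print(f"{event:12s} FV importance = {importance:.2f}")
```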
It is expected that the MCS and the importance of the basic events resulting from this study will help to find and remove the causes of failure and power outage in the PCS for efficient safety management.

Development of Hardware for Controlling Abnormal Temperature in PCS of Photovoltaic System
Kim, Doo-Hyun; Kim, Sung-Chul; Kim, Yoon-Bok, p 21
This paper aims to develop hardware for controlling the abnormal temperatures that can arise from the environment and from the components themselves in a PCS. For this purpose, the hardware, which consists of four parts (sensing, PLC, monitoring and output), continuously monitors the temperature of the critical components of the PCS and can control abnormal temperatures. To apply the hardware, a 20-kW PV power generation facility in Cheong-ju city was selected and data were measured for one year in 2017. From the temperature data, four critical components were identified (discharge resistance, DC capacitor, IGBT, DSP board) and setting values for operating the fan were entered. The setting values for operating the fan are up to 130 °C for the discharge resistance, 60 °C for the DC capacitor, and 55 °C for the IGBT and DSP board. The hardware was installed in the same 20-kW PCS in Cheong-ju city in 2018, and the power generation output was analyzed for the five days with the highest atmospheric temperature (clear days) in July and August of 2017 and 2018. With the hardware, the power generation output of the PV system increased by up to 4 kWh.

Emission Characteristics of Gasoline/ethanol Mixed Fuels for Vehicle Fire Safety Design
Kim, Shin Woo; Lee, Eui Ju, p 27
Combustion characteristics of gasoline/ethanol fuel were investigated both numerically and experimentally for vehicle fire safety. The numerical simulation was performed on a well-stirred reactor (WSR) to simulate a homogeneous gasoline engine and to clarify the effect of ethanol addition to the gasoline fuel. The simulation cases, with three independent variables (ethanol mole fraction, equivalence ratio and residence time), were designed and optimized systematically based on the response surface method (RSM). The results for the stoichiometric gasoline surrogate show that the auto-ignition temperature increases but NOx yields decrease with increasing ethanol mole fraction. This implies that bioethanol-added gasoline is an eco-friendly fuel under engine running conditions. However, unburned hydrocarbon increases dramatically with increasing ethanol content, which results from incomplete combustion and hence requires adjustment of the combustion itself rather than an after-treatment system. For a more tangible understanding of the effect of gasoline/ethanol fuel on pollutant emissions, experimental measurements of combustion products were performed in gasoline/ethanol pool fires in a cup burner. The results show that the soot yield measured by gravimetric sampling decreased dramatically as ethanol was added, but the NOx emission was almost comparable regardless of ethanol mole fraction. Regarding soot morphology by TEM sampling, incipient soot such as liquid-like PAHs was clearly observed in the soot from gasoline containing more ethanol, whereas the soot from undiluted gasoline appeared more mature.

Flash Point Measurement of n-Propanol+n-Hexanol and n-Butanol+n-Hexanol Systems Using Seta Flash Closed Cup Tester
Ha, Dong-Myeong; Lee, Sungjin, p 34
Flash point is an important indicator for determining the fire and explosion hazards of liquid solutions.
In this study, the flash points of n-propanol+n-hexanol and n-butanol+n-hexanol systems were obtained with a Seta flash tester. Methods based on the UNIFAC equation and on multiple regression analysis were used to calculate the flash points, and the calculated values were compared with the experimental ones. The absolute average errors (AAE) of the flash points calculated by the UNIFAC equation are 2.9 °C and 0.6 °C for n-propanol+n-hexanol and n-butanol+n-hexanol, respectively, while those calculated by multiple regression analysis are 0.5 °C and 0.2 °C. As can be seen from the AAE, the values calculated by multiple regression analysis are better than those obtained by the method based on the UNIFAC equation.

Experimental Studies of the Explosion Characteristics by Varying Concentrations of a Multi Layered Water Gel Barrier
Ha, Dae Il; Park, Dal Jae, p 40
Experimental studies have been carried out to investigate the characteristics of gas explosions using a multi-layered water gel barrier (MLWGB) in a vented explosion chamber. The chamber is 1600 mm in length, with a square cross-section of 100 × 100 mm². The gel concentration of the inner layer of the MLWGB ranged from 10% to 90%, at intervals of 10% by weight of gel. The displacement of the MLWGB was photographed and measured using a high-speed video camera, and the pressure development was measured using a data acquisition system. It was found that MLWGBs with inner-layer concentrations of 10–20% were ruptured during the explosions. As the concentration of the inner layer increased from 30% to 90%, the barriers were not ruptured. For the ruptured barriers, as the gel concentration of the inner layer increased, the displacement toward the chamber exit increased and the pressure decreased. The pressure attenuation obtained with the MLWGB was higher than that of a single water gel barrier. For the non-ruptured barriers, the pressure inside the chamber increased less with increasing gel concentration of the inner layer. It was also found that the displacement moved back into the chamber for non-ruptured MLWGBs and that it was sensitive to the gel concentration.

Development of Accident Cause Analysis Model for Construction Site
Lim, Won Jun; Kee, Jung Hun; Seong, Joo Hyun; Park, Jong Yil, p 45
Accident analysis models were developed to improve construction site safety, and case studies were conducted. In 2016, 86% of fatal accidents occurred due to simple unsafe acts. Structure-related accidents are less frequent than non-structure-related ones, but the number of casualties per accident is two times higher. From the viewpoint of risk perception, efforts should be directed at reducing accidents caused by low-frequency, high-consequence structure-related causes. For structure-related accidents, structural safety inspection and management (including quality) and ground condition management/inspection technology were proposed as solutions, while for non-structure-related accidents, provision of a risk information delivery system was proposed. In the analysis of the relationships among safety-related stakeholders, the main problems were the lack of knowledge of the controller and player, loss of control due to duplicated controls, the lack of a communication system for risk information, and relative position errors between controller and player.
Reliability Analysis of Three-Dimensional Temporary Shoring Structures Considering Bracing Member and Member Connection Condition
Ryu, Seon-Ho; Ok, Seung-Yong; Kim, Seung-Min, p 53
This study performs reliability analyses of three-dimensional temporary shoring structures with three different models. The first model represents a field model that does not have diagonal bracing members; the installation of bracing members is often neglected in the field for convenience. The second model corresponds to a design model that has the bracing members, with hinge connections between the horizontal and bracing members at the joints. The third model is similar to the second, but the hinge connection is replaced with a partial rotational stiffness. The reliability analysis results revealed that the vertical members of the three models are safe enough in terms of axial force, but the vertical and horizontal members exhibit a large difference among the three models in terms of the combined stress of axial force and bi-axial bending moments. The field model showed a significant increase in the failure probability of the horizontal member, and thus the results demonstrate that the bracing members must be installed to ensure the safety of temporary shoring structures.

A Case Study on the Potential Severity Assessment for Incident Investigation in the Shipbuilding Industry
Ye, Jeong Hyun; Jung, Seung Rae; Chang, Seong Rok, p 62
Korean shipbuilding companies have made many efforts toward safety over the years by developing Health, Safety & Environment (HSE) management systems, procedures, training, and study programs for the prevention of incidents. As a result, the shipbuilding industry has succeeded in reducing overall injury rates. Nevertheless, the industry has also noticed that incident rates are still not at zero and, more importantly, serious injuries and fatalities are still occurring. One factor that may be contributing to this is the lack of management of potential severity during incident investigations; most incident investigations are carried out based only on the actual result. Generally, each shipbuilding company develops its own customized incident investigation program, and these too are commonly focused on the actual result. This study aimed to develop a shift in safety strategy by classifying criteria for the potential severity of any incident and managing them to prevent recurrence and serious injuries or fatalities in the shipbuilding industry. Several global energy companies have already developed potential severity management tools and applied them in their incident investigations. In order to verify the necessity of improving the current systems, a case study and a comparative analysis between a domestic shipbuilding company and several global energy companies from foreign countries were carried out, and a comparison of two incident investigation cases from specific offshore projects was conducted to measure the value of a potential severity system. In addition, a checklist was established from the data on fatalities and serious injuries that occurred in the Korean shipbuilding industry in the last five years, together with a proposal to verify high-potential incidents in the incident investigation process; a comparative analysis between the assessment applying the proposed checklist and the assessment from a global energy company using its own system was also carried out.
As a measure to prevent incidents, it is necessary to focus on potential severity assessment during the incident investigation rather than only controlling the actual result. Hence, this study aims to propose a realistic plan that enables improvement of the existing practices of incident investigation and control in the shipbuilding industry.

A Study of the Reaction Time on Older Driver
Lee, Dae Hee; Park, Jin Soo, p 70
The aging of society is not only Korea's challenge but also a global issue. It leads to an increase in the number of elderly drivers, who are much more prone to major traffic accidents. Therefore, we need a system to test aged drivers' driving abilities. As part of the efforts to establish such a system, researchers of KoROAD have conducted a study on the correlation between aging and driving ability by analyzing older drivers' reaction times. The study shows that there was a sharp increase in reaction time for drivers aged 65 and over. The currently assumed reaction time of 2.5 seconds for 85 percent of eligible drivers needs to be revised upward in aging societies. From now on, we need to come up with traffic safety measures that can deal with the issue of drivers of old age.

A Study on the Prevention Measures of Human Error with Railway Drivers
Kim, Dong Won; Song, Bo Young; Lee, Hi Sung, p 76
In this study, the causes of human error were identified through a survey of the drivers of three organizations: Seoul Metro, Seoul Metropolitan Rapid Transit Corporation, and Korail. The study was started with the aim of finding and eliciting causes in various directions, including human factors, job factors, and environmental factors. The Cronbach alpha value for the reliability of the stress-induced factors in the operational area was 0.95. The significance probability for organisational factors was 0.82, and the significance of the sub-accident experience and driving skill factors in operation was 0.81. In addition, the analysis results showed that the stress-induced factors in the field of driving scored higher than the human factors in the reliability analysis. The results confirmed that the reliability of the organizational and operational stress-induced factors was higher than that of the other causes. In order to reduce urban railroad accidents, this paper suggests a method for operating safe urban railroads through the minimization of human error.

Effect of Providing Detection Information on Improving Signal Detection Performance: Applying Simulated Baggage Screening Program
Lim, Sung Jun; Choi, Jihan; Lee, Jidong; Ahn, Ji Yeon; Moon, Kwangsu, p 82
The importance of aviation safety has been emphasized recently due to the development of the aviation industry. Despite the efforts of each country and the improvement of screening equipment, screening tasks are still difficult and detection failures are frequent. The purpose of this study was to examine the effect of feedback on improving signal detection performance, applying a Simulated Baggage Screening Program (SBSP), in order to improve aviation safety. The SBSP consists of three parts: image combination, option setting and experiment. The experimental images were color-coded to reflect the items' transmittance of the x-rays and could be combined according to the researchers' needs. In the options, the researcher could set the information, incentives, and comments needed for training to be delivered, as well as the number of tasks and times.
The experiment was conducted using the SBSP, and the participants' performance information (hits, misses, false alarms, correct rejections, reaction time, etc.) was automatically calculated and stored. A total of 50 participants took part, and each participant was randomly assigned to the feedback or non-feedback group. Participants performed a total of 200 tasks, of which 20 (10%) contained a target object (gun or knife). The results showed that when feedback was provided, the hit and correct rejection ratios and d′ increased, while the false alarms and misses decreased. However, there was no significant difference in the response criterion (β). In addition, the implications and limitations of this study and future research were discussed.

A Experimental Research on Stair Ascent Evacuation Support for Vulnerable People
Lee, Ji Hyang; Lee, Hyo Jeong; Kwon, Jin Suk; Park, Sang Hyun, p 90
This study aims to experimentally compare the stair ascent transportation speed and the physical burden on evacuation supporters according to the type of stair ascent transportation for vulnerable people. In this study, we measured the heart rate of the supporters as an indicator of physical burden during the transportation. The subjects of this experiment were male students aged 20–26. The experimental conditions were the method of stair transportation and the weight of the vulnerable person. The types of stair transportation were giving a piggyback ride and carrying a wheelchair. Each experimental trial was video-recorded for measurement of ascent speed and observation of the supporters' movement.
A Study on the Estimation and Verification of the Availability of the Unmanned Light Railway Kwon, Sang Don;Song, Bo Young;Lee, Hi Sung 108 https://doi.org/10.14346/JKOSOS.2019.34.1.108 PDF Unattended Train operation(UTO) requires higher safety target than other systems, since all train operations are automatic. The system provider to deliver without accident or failure, and the operator to transport passengers without accident by putting all trains supplied, including them, into service. Safety rates without such failures can be represented as indicators of RAMS, among which availability is continuously controllable to achieve the target, with a clear target. Availability is often required by the licensee from the initial stage of the project to demonstrate that the request for proposal (RFP) is usually specified and to maintain separate availability targets at the operational stage. In particular, unlike unmanned operation light rail in complex systems, simple formulas are often presented to facilitate verification at each stage. This paper presents this method of usability calculation in an integrated manner at all levels and analyzes the existing usability values to ensure reliability of the availability formula for integrated use in unmanned light rail systems. A Study on the Improvement of the Safety Insurance for the Laboratory at the Korean Worker's Compensation Insurance - Focusing on Disability Benefit Pension Type Payment - Song, H.S.;Yee, N.H.;Choi, J.G.;Chun, S.H.;Kim, Jai Jung;Lee, B.H. 115 Background: Due to the diversification and advancement of research, researchers have become to deal with a variety of chemical and biological harmful materials in the laboratories of universities and research institutes and the risk has increased as well. Therefore, it is necessary to strengthen the social safety net for laboratory accidents by strengthening the compensation to the level comparable to that of Korean Workers' Compensation & Welfare Service, when the researchers become physically disabled by laboratory accidents. The purpose of this study is to secure researchers' health rights and to create a research environment where researchers can work with confidence by strengthening the compensation to the level comparable to that of Korean Workers' Compensation & Welfare Service. Method: We analyzed the laboratory accidents by year, injury type, severity of accident and disability grade with the 6 year data from 2011 to 2016, provided by Laboratory Safety Insurance. Based on the analysis result, we predicted the financial impact on Laboratory Safety Insurance if we introduce a compensation annuity by disability grade which is similar to Injury-Disease Compensation Annuity of Korean Workers' Compensation & Welfare Service. Result :As of 2011, the insured number of Laboratory Safety Insurance was approximately 700,000. The Average premium per insured was KRW 3,339 and there were 158 claims. Total claim amount was KRW 130 million, whereas the premium was about KRW 2.3 billion. The loss ratio was very low at 5.75%. If we introduce a compensation annuity by disability grade similar to Injury-Disease Compensation Annuity of Korean Workers' Compensation & Welfare Service, the expected benefit amount for 1 case of disability grade 1 would be KRW 1.6 billion, assuming 2% of interest rate. Given current premium, the loss ratio, the ratio of premium income to claim payment, is expected 41.4% in 2017 and 151.6% in 2026. 
The increased loss ratio due to the introduce of the compensation annuity by disability grade is estimated to be 11.0% in 2017 and 40.4% in 2026. Conclusion: Currently, laboratories can purchase insurance companies' laboratory safety insurance that meets the standards prescribed by Act on the Establishment of Safe Laboratory Environment. However, if a compensation annuity is introduced, it would be difficult for insurance companies to operate the laboratory safety insurance due to financial losses from a large-scale accident. Therefore, it is desirable that one or designated entities operate laboratory safety insurance. We think that it is more desirable for laboratory safety insurance to be operated by a public entity rather than private entities.
Study protocol
Assessing biomarkers and neuropsychological outcomes in rural populations exposed to organophosphate pesticides in Chile – study design and protocol
Muriel Ramírez-Santana1, Liliana Zúñiga2, Sebastián Corral2,3, Rodrigo Sandoval2, Paul TJ Scheepers4, Koos Van der Velden5, Nel Roeleveld4,6 & Floria Pancetti2
BMC Public Health volume 15, Article number: 116 (2015)

Health effects of pesticides are easily diagnosed when acute poisoning occurs; however, the consequences of chronic exposure only become apparent when neuropsychiatric, neurodegenerative or oncologic pathologies appear. Therefore, early monitoring of this type of exposure is especially relevant to avoid the consequences of the pathologies described above, particularly in workers exposed to pesticides on the job. For acute organophosphate pesticide (OPP) exposure, two biomarkers have been validated: plasma cholinesterase (ChE) and acetylcholinesterase (AChE) from erythrocytes. These enzymes become inhibited when people are exposed to high doses of organophosphate pesticides, along with clear signs and symptoms of acute poisoning; therefore, they do not serve to identify risk from chronic exposure. This study aims to assess a novel biomarker that could reflect the neuropsychological deterioration associated with long-term exposure to organophosphate pesticides: the enzyme acylpeptide hydrolase (ACPH), which has recently been identified as a direct target of action of some organophosphate compounds.
Methods/Design
Three population groups were recruited over three years (2011–2013): Group I had no exposure to pesticides and included people living in Chilean coastal areas far from farms (external control); Group II included individuals living within the rural and farming area (internal control) but not occupationally exposed to pesticides; and Group III included people living in rural areas, employed in agricultural labour and having had direct contact with pesticides for more than five years. Blood samples to assess biomarkers were taken and neuropsychological evaluations were carried out seasonally: in three time frames for the occupationally exposed group (before, during and after the fumigation period), in two time frames for the internal control group (before and during fumigation), and only once for the external controls. The neuropsychological evaluations considered cognitive functions, affectivity and psychomotor activity. The biomarkers measured included ChE, AChE and ACPH. Statistical analysis and mathematical modelling used both the laboratory results and the neuropsychological testing outcomes in order to assess whether ACPH would be acceptable as a biomarker of chronic exposure to OPP. This study protocol has been implemented successfully during the time frames mentioned above in the 2011, 2012 and 2013–2014 seasons.
Background
Human exposure to organophosphate pesticides (OPP) has been extensively documented, showing health problems associated primarily with agricultural workers having occupational exposure in developing countries [1]. While acute poisonings are relatively easy to diagnose because they are accompanied by symptoms of cholinergic overstimulation [2], the effects of chronic, long-term exposure to low OPP doses only become evident when carcinogenic, teratogenic [3,4] or neurodegenerative pathologies appear [5-7].
The nervous system is particularly sensitive to the effects of OPP; therefore, early bio-monitoring of neurotoxic effects in exposed people can help prevent the onset of future neurodegenerative diseases by prompting measures to avoid or diminish the level of OPP exposure. The diagnosis of acute or chronic exposure to organophosphate pesticides (OPP) usually employs two different blood enzymes as biomarkers: plasma pseudocholinesterase (or butyrylcholinesterase, BuChE) and erythrocyte acetylcholinesterase (AChE), the latter being the enzyme most used for estimating chronic exposure [8]. The catalytic activity of both these enzymes is inhibited by OPP, and in the case of AChE, which is expressed in the synapses of the nervous system, this inhibition reflects the cholinergic overstimulation responsible for the signs and symptoms of OPP poisoning. Therefore, their usefulness as biomarkers of low-dose exposure to OPP is limited. Because of this, it is necessary to develop a more sensitive blood biomarker that accounts for long-term, low-dose exposure to OPP. There is much evidence relating low-level, prolonged OPP exposure to deterioration in cognitive performance. The scientific literature reporting the effects of long-term exposure to OPP on cognitive processes strongly indicates that the impairment of cognitive or neurological processes correlates with the time of exposure to OPP [1,2]. Rohlman and collaborators [8] indicate that the appearance of this impairment does not always correlate with inhibited cholinesterase activity, suggesting that the action of OPP depends on the type and burden of pesticides to which people are exposed. At the same time, it is important to mention that most studies have measured biomarkers and neuropsychological performance only once, without considering different fumigation seasons that allow for the possibility of reversibility in neuropsychological performance [9,10]. Methodological weaknesses of previous studies relate to examining different occupational groups with different levels and routes of exposure, different time periods, small sample sizes, and other epidemiological constraints that limit the variables of exposure and health effects, among others [11]. Furthermore, existing biomarkers are not sensitive enough and do not allow measurement of chronic exposure or chronic effects [8]. Acylpeptide hydrolase (ACPH) is a non-cholinesterase target of OPP that seems to be involved in the effects these molecules have on cognitive processes [12]. ACPH, also known as acylamino-acid releasing enzyme or acylaminoacyl peptidase, is a homomeric tetramer that belongs to the prolyl-oligopeptidase family of the serine hydrolases [13] and catalyzes the hydrolysis of several peptides possessing an acylated N-terminal amino acid to generate an acylated amino acid and a free N-terminal peptide [14,15]. A truncated form of the enzyme with endopeptidase activity has also been described [16]. In mammals, ACPH acts in coordination with the proteasome to clear cytotoxic denatured proteins from cells [17,18]. Strong inhibition of ACPH activity leads to apoptosis [19], and deletions in the gene encoding ACPH, leading to deficiencies of this enzyme, have been observed in renal and small-cell lung carcinomas [20,21]. Regarding the role of ACPH in the nervous system, it is known that ACPH is involved in the modulation of synaptic activity [22] and can be found localized in pre-synaptic compartments of the rat telencephalon [23].
Interestingly, it has been reported that ACPH can degrade monomers, dimers and trimers of the Aβ1–40 peptide [24,25]. Richards and collaborators reported that some OPP such as chlorpyrifos-methyl oxon, dichlorvos, and diisopropyl fluorophosphate (DFP) exhibit a higher affinity toward ACPH than toward AChE. Specifically, dichlorvos and DFP showed a 6.6 to 10.6-fold increased affinity toward ACPH with respect to AChE [26]. On the other hand, it has been demonstrated in animal models that inhibition of ACPH by the OPP dichlorvos has biphasic effects on the cellular mechanisms responsible for learning and memory: low doses of dichlorvos had positive effects on synaptic plasticity processes, while high doses or prolonged exposure times had the opposite effect and were neurotoxic [12]. In spite of this, it has been described that chlorpyrifos-oxon, diazoxon, paraoxon and mipafox, among other organophosphate compounds, inhibit ACPH as well as AChE activity from erythrocytes. This lack of specificity is compensated for by the persistence of inhibition of ACPH activity (more than four days) compared with the inhibition of erythrocyte AChE activity or plasma ChE activity, which has a half-life of 11 days [27,28]. These findings support the notion that erythrocyte ACPH activity is a sensitive and reliable biomarker for monitoring chronic exposure to OPP [12] associated with cognitive deterioration. The main purpose of this study is to develop the measurement of ACPH activity as a novel erythrocyte biomarker to support the early diagnosis of chronic exposure to OPP associated with neuropsychological impairment. The study design addresses some of the methodological concerns described by previous research [10,11,29]: the evaluation of more than one control group, and the measurement of biomarker activity along with neuropsychological performance at different moments during the spraying season (before, during and after spraying with pesticides). Specific objectives ▪ To obtain activity profiles of the two blood enzyme biomarkers commonly measured (AChE and ChE) and of the new biomarker, ACPH, in three cohorts with different levels of exposure to OPP: occupational, environmental and no known exposure. ▪ To obtain the neuropsychological performance profiles in the three cohorts described above and to assess the risk of cognitive impairment in these populations. ▪ To correlate the enzymatic activities of each of the three biomarkers with the cognitive status in the cohorts described above with different levels of OPP exposure. ▪ To analyse changes in the enzyme activities and/or in cognitive performance within the occupationally and environmentally exposed cohorts that are dependent upon the fumigation period (before, during and after fumigation). ▪ To establish whether ACPH activity is a suitable biomarker of long-term exposure to OPP associated with cognitive deterioration. Settings and target population The study was conducted between the fall of 2011 and the fall of 2014 in urban and rural locations of Coquimbo Region, in northern Chile. People from two urban locations (the cities of Coquimbo and La Serena) and four rural districts (La Higuera, Paihuano, Vicuña and Monte Patria) were recruited. The main agricultural activity is located in the Paihuano, Vicuña and Monte Patria districts and is related to grape and citrus farming.
To be included in the study, subjects had to meet the following criteria: between 18 and 50 years old, right-handed and without a diagnosis of neurological or psychiatric illness. Three population groups were considered: Group I (external control), individuals without environmental or occupational exposure to OPP, living in coastal locations; Group II (internal control), individuals living in rural locations near farming activities and probably under environmental exposure; and Group III (occupationally exposed), people occupationally exposed to pesticides, composed of agricultural labourers living in rural areas and in direct contact with pesticides for more than 5 years. Within this third group there were blenders, fumigators, tractor drivers, supervisors, collectors and packing workers. Also, to be included in this group, individuals must never have suffered acute intoxication due to OPP. For all recruited individuals across the study, a baseline neuropsychological interview was done and a blood sample was taken. All procedures were accomplished in a mobile laboratory (an adapted Peugeot Boxer van, stationed permanently at the farms). All evaluations were carried out annually in three time frames for the occupationally exposed group (before, during and after the fumigation period); in two time frames for the internal control group (before and during fumigation); and in a single time frame for the external controls. Exclusion criteria were: left-handedness, diagnosis of medical or psychiatric disease or disability, and use of psychopharmacologic medication. For details, see Figure 1. Methodology chart: selected population groups, timeline evaluation and variables to measure. Ethical approval for the study was obtained from the Research Ethics Committee of the Universidad Católica del Norte (The Catholic University of the North, in Coquimbo, Chile), dated August 25, 2014. Informed consent was explained to voluntary participants and signed by them before their recruitment. Power and sample size estimation Two main objectives of the study were considered when calculating the number of participants to be recruited: a prevalence study (measuring enzyme activity and neuropsychological performance) and a case-control study (assessing ACPH as a diagnostic test). The size of the occupationally exposed population in Coquimbo Region is 14,000 workers according to the Ministry of Agriculture (2008) [30]. Considering that 10% of those workers have tasks involving direct use of pesticides (mixers, blenders, applicators), with a 95% exposure rate, a minimal sample size of 70 people was considered adequate for the prevalence study with a 5% margin of error [31]. Secondly, for assessing the diagnostic test, the sample size was calculated based on the following equation [32]: $$ n=\frac{\left[Z_{\alpha}\sqrt{2p\left(1-p\right)}+Z_{\beta}\sqrt{p_{1}\left(1-p_{1}\right)+p_{2}\left(1-p_{2}\right)}\right]^{2}}{\left(p_{1}-p_{2}\right)^{2}} $$ This treats the diagnostic test assessment as a case-control study, the "cases" being those people having neuropsychological impairment and the "controls" being those people considered "normal" in their neuropsychological performance. Occupationally exposed people were expected to have lower ACPH activity than non-occupationally exposed people. Assuming that the ACPH test is oriented toward high sensitivity (0.999) for exposed people and 60% specificity for non-exposed people, the "n" for this scenario would be 77 individuals.
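As an illustration of the sample-size formula above, here is a minimal Python sketch (our own, not part of the protocol); the significance level, power and the proportions p1 and p2 shown are placeholder assumptions, not the parameters actually used to obtain n = 77:

```python
from math import sqrt
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for comparing two proportions (case-control design).

    p1, p2 : expected proportions of the marker (e.g., low ACPH activity) in cases and controls.
    Uses the usual convention p = (p1 + p2) / 2 in the first square-root term;
    a two-sided Z_alpha is assumed here.
    """
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    p = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p * (1 - p))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

# Illustrative values only (not the study's parameters)
print(round(n_per_group(0.60, 0.35)))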
Finally, considering that between 20 and 30 per cent of the volunteers could be lost to yearly follow-up, the minimum number of study participants recruited was 100 people per group. Identification and recruitment of participants Several meetings were held in different locations, where the field team explained the project and collected personal contact information from potential participants. Trained medical students employed a short questionnaire in order to exclude people who did not fulfil the enrolment criteria. The main questionnaire was then applied to those who met the requirements. A code was given to each participant, which was used for identification in the questionnaire, for labelling blood samples and for the neuropsychological test results. The neuropsychological evaluation and blood samples were taken at the same time on a fixed date agreed with the volunteers. Biomarkers evaluation In blood samples, the levels of three enzymes related to OPP exposure were assessed in the three population groups. The evaluations were done in the field, in a mobile laboratory especially equipped for this purpose. The enzymes considered were plasmatic ChE, erythrocyte AChE and ACPH, measured as explained in the "Setting and Target Population" section. Blood samples collection, storage, and transportation A robust sampling and tracking system was implemented to ensure proper collection of both blood samples and survey data from each volunteer. Volunteers signed the consent form at the beginning of the process, after a detailed explanation of the project. Blood samples were collected by venepuncture in EDTA anticoagulant vacutainers, coded using a unique sample identification code and processed daily within 12 hours, keeping them at 4°C inside the mobile laboratory. Processing samples in the field consisted of separating plasma from cells by centrifugation (10 min, 3000 rpm, 4°C), after which the cell fraction was washed twice with cold PBS 1X; finally, each fraction (cells and plasma) was aliquoted sequentially into three labelled cryovial tubes and frozen using liquid nitrogen. During transportation, the samples remained at −80°C until final analysis. The paramedic transporting and transferring all biological material to the technician kept a registration log of all collected specimens. Once in the lab, the technician checked and stored the samples. One aliquot of each fraction was thawed before enzymatic measurement. All determinations were done in triplicate based on the method of Ellman et al. (1961) [33]. We used a Specord 205 spectrophotometer (Analytik Jena). Plasmatic Cholinesterase (ChE) was measured directly using non-diluted plasma. Enzymatic activity was normalized to the protein content in the assay; protein content was determined by the bicinchoninic acid method [34] and enzymatic activity from each fraction was expressed as mean ± SD. Erythrocytic Cholinesterase (AChE): To obtain the protein from broken erythrocytes, cells were lysed using dithiothreitol (1 mM). After centrifugation (10,000 rpm, 30 min, 4°C) the supernatant was separated from the pellet and kept on ice for ACPH measurement. The pellet was washed once with cold phosphate buffer (0.05 M) and then measured in a reaction assay mixture (1.037 mL), which consisted of 5,5′-dithio-bis-2-nitrobenzoic acid (DTNB [0.241 mM]), acetylthiocholine (A-s-choline [0.029 mM]), disodium phosphate buffer (Na2HPO4 [0.31 mM]) and potassium phosphate buffer (KH2PO4 [0.023 mM]).
The hydrolysis rate of acetylthiocholine was followed as indicated above (see plasma cholinesterase). Acyl peptide hydrolase (ACPH): The measurement was performed using Tris–HCl (pH 7.4 [100 mM]) with DTT [1 mM]. The reagent mixture (1.020 mL) consisted of N-acetyl-L-alanine p-nitroaniline (AANA [3.9 mM]), Tris–HCl buffer (pH 7.4 [95.59 mM]), dithiothreitol (DTT [0.96 mM]) and dimethylsulfoxide (DMSO [275.88 mM]). The hydrolysis rate of AANA was then followed spectrophotometrically through the formation of p-nitroanilide (ε410 = 8800 M−1 cm−1), measured at 405 nm and 37°C during 40 min. The enzymatic activity was normalized to the haemoglobin content in the original blood sample volume. Haemoglobin was measured using the cyan-methaemoglobin method [35]. Briefly, blood samples were mixed with a solution containing ferricyanide and cyanide; the haemoglobin, converted to cyan-methaemoglobin, was then measured at 520 nm. In order to validate the replicability of our results, the National Institute of Public Health (Santiago, Chile) supported the quality control for plasmatic and erythrocytic cholinesterase. A random subset of samples was sent to the laboratory of Occupational Toxicology at the Department of Occupational Health in the Institute of Public Health, where plasmatic and erythrocytic cholinesterase activities were determined according to Ellman's method. The data obtained at the Institute of Public Health were matched to the results of the same samples obtained in our laboratory and a statistical comparison was performed. This procedure was done for each of the cohorts being evaluated. Methods for neuropsychological evaluation In order to diagnose cognitive impairment, a Speech Therapist performed a psychological interview and a battery of neuropsychological tests for each volunteer. This battery covered three areas: cognitive functions, mood and psychomotor activity. We considered these three areas because the accumulation of acetylcholine in the synaptic cleft continuously stimulates the cholinergic synapses, triggering diverse symptoms in the behavioural, cognitive and neuromuscular areas. Table 1 shows the different cognitive functions and the tests used for their evaluation [36-39]. The time frame for a complete individual evaluation was about three hours. The effects of fatigue on the level of cognitive performance were addressed by beginning each evaluation with those tests most sensitive to fatigue, such as attention span, reaction time and processing speed [40,41]. Additionally, a rest interval was included during the process. Table 1 Cognitive functions and associated tests The evaluations were all performed by a Speech Therapist trained and supervised by a board-certified neuropsychologist; in this way, inter-assessor bias was eliminated. All procedures were carried out in the mobile laboratory. To avoid learning effects, tests with a low learning component were selected and carried out at intervals of at least two months. In relation to assessment instruments, these were selected according to the following criteria: age of participants, reading and writing skills, and absence of severe sensory deficits. It is necessary to indicate that, given the number of tests applied and the time they took, we evaluated only two to three individuals a day. A baseline manual for measuring the level of performance was used for each original set of tests, and scores of the exposed populations were compared with scores of the control population.
We calculated neuropsychological results clustered by area (cognitive functions: memory, attention span, constructive praxis and executive functions; mood; and psychomotor activity) and by the average test score for each area. A unique final score will be calculated for each individual and evaluation time. Exposure characterization Trained medical students conducted interviews using a questionnaire to assess personal, medical, social and occupational conditions. This questionnaire covered a broad spectrum of information in order to avoid misperceptions, because most people involved in the study are farmers or fishermen with very basic educational levels. The different topics in the inquiry included: personal data (gender, age, occupation, address and years of study); consumption habits (tobacco, alcohol, drugs, medicines); family and personal medical history (including obstetric history for women, e.g. miscarriage or reproductive problems); occupational history and time of exposure to pesticides (years working and number of workdays per season each year); and knowledge, training and use of safety measures at work. In order to correlate the level of exposure with the neuropsychological and enzyme outcomes, a single variable of occupational exposure was developed that considered the number of years living in the farming area (for the internal control group). For the workers, the variable was constructed using years living in the farming area and years working with pesticides. Because agricultural work is seasonal, workers are not always in contact with pesticides the entire year. Therefore, the number of working years was corrected by estimating the number of days actually worked using pesticides during a calendar year. In addition, those "adjusted" years were amplified by a three-fold "occupational exposure factor", based on publications that report odds ratios of 3 to 7.9 for self-reported symptoms in workers after using pesticides relative to controls in similar settings [42,43]. The equation that relates these parameters is called Days of Adjusted Life Long Occupational Exposure to Pesticides (ALLOEP) and is described as follows (a code sketch of this calculation is given below): $$ \mathrm{ALLOEP} = \frac{(\text{years living in agricultural area} \times 365) + (\text{years working with pesticides} \times \text{workdays with pesticides per year} \times 3)}{365} $$ Of course, several factors could affect the absorption of pesticides in workers and therefore moderate the exposure level. Information about hazards, proper handling, use of protective equipment and safety measures has been shown to be effective in reducing exposure by 2% up to 77%, depending on the protective equipment used [44]. The possibility of adding a moderation factor to the exposure variable ALLOEP will be explored [44-47]. Refer to Table 2 for details about variable description and factors utilized to build indicators measuring exposure. Table 2 Variable description and factors utilized to build indicators measuring exposure Reporting participants and feedback Information is being distributed to stakeholders about the progress of the project on a regular basis. Individual lab results on enzyme activity and neuropsychological evaluation will be given to participants at the end of the project by the Project Director as a summary, instead of reporting each individual test result from the large battery of tests.
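As referenced above, here is a minimal sketch of the ALLOEP calculation (our own illustration of the equation; the variable names are hypothetical, not taken from the study's data dictionary):

```python
def alloep(years_in_farming_area, years_working_with_pesticides,
           workdays_with_pesticides_per_year, occupational_factor=3):
    """Days of Adjusted Life Long Occupational Exposure to Pesticides (ALLOEP),
    expressed in years, following the equation given in the text."""
    environmental_days = years_in_farming_area * 365
    occupational_days = (years_working_with_pesticides
                         * workdays_with_pesticides_per_year
                         * occupational_factor)
    return (environmental_days + occupational_days) / 365

# Example: 20 years living in the area, 10 years working ~90 days/year with pesticides
print(alloep(20, 10, 90))   # ~27.4 "adjusted" years
```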
Epidemiologic data analysis and statistical analysis Statistical analysis will be done in SPSS. Several steps will be followed for the data analysis, given the different specific objectives of the study. To investigate the trends of the three biomarkers in each of the population groups: simple descriptions of the enzymatic activity, supported by scatter plots, are being produced for each of the biomarkers in every population group (occupationally exposed, internal control and external control) and for each of the fumigation periods considered. ANOVA tests are being used to detect significant differences in mean enzymatic activity between groups and between fumigation periods within the exposed (occupational and environmental) groups. Differences in enzymatic activities within each individual will be assessed when volunteers have more than one measurement. To assess the risk of neuropsychological weakening in the three population groups: according to standard test scores, "normal" and "below normal" individuals were identified. Taking the external control group as the baseline for comparing neuropsychological performance, odds ratios (OR) are being calculated for each of the exposed population groups (environmental and occupational exposure) and for the different fumigation seasons. In order to avoid confounding by age and level of education, ORs are stratified by age, level of education and alcohol consumption. Chi-square statistics and confidence intervals are being calculated for each OR in order to assess significance and power. The Mantel–Haenszel correction will be used when necessary. To correlate the enzymatic activity of the three biomarkers with the level of exposure: to reach this objective across the three population groups and different fumigation periods, we synthesize the exposure level into a single variable (ALLOEP). To correlate the enzymatic activity of the three biomarkers with neuropsychological performance: in the three population groups and different fumigation periods, test scores are being correlated with enzymatic activity for the three biomarkers. To assess the performance of ACPH activity as a diagnostic test of prolonged exposure to OPP: ROC curves are being developed using performance results in the battery of tests as the gold standard. The case definition criteria extracted from the performance in the battery of tests are taken from the scale of each test given by the test provider. The entire population of participants evaluated in the baseline condition (pre-fumigation) was classified into "cases" and "controls" according to the "case definition criteria", the "cases" being people with poor test performance and the "controls" being those with normal test performance. The contribution of the enzymes or any other variables to predicting a case is being assessed using a logistic regression model. The use of other data-mining techniques (neural networks and decision trees) for exploring predictive models is also being considered. One of the strengths of this study is the assessment of more than one control group for occupational exposure, which includes the possibility of evaluating the effects of environmental and occupational exposures. Inclusion and exclusion criteria improve the design by avoiding selection bias and confounding (age; urban or rural social context; no neuropsychological illness, trauma and/or medication; right-handedness; no known pesticide intoxication).
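Returning to the analysis plan above, the ROC-based assessment of ACPH as a predictor of a "case" can be sketched as follows. This is our own minimal illustration on synthetic stand-in data, not the study's measurements or its SPSS workflow; the real analysis additionally stratifies and adjusts for covariates such as age, education and alcohol consumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)

# Synthetic stand-in data: ACPH activity is assumed lower, on average,
# in participants classed as impaired ("cases") than in "controls".
n = 200
case = rng.integers(0, 2, size=n)                       # 1 = impaired, 0 = normal (from the test battery)
acph = rng.normal(loc=10 - 2 * case, scale=2, size=n)   # enzymatic activity (arbitrary units)

X = acph.reshape(-1, 1)
model = LogisticRegression().fit(X, case)               # contribution of the enzyme to predicting a case
scores = model.predict_proba(X)[:, 1]

fpr, tpr, _ = roc_curve(case, scores)                   # points of the ROC curve
print("AUC:", roc_auc_score(case, scores))
```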
Additionally, all outcomes are being measured at baseline and during the fumigation period, permitting the assessment of changes in enzymatic activity and of the neuropsychological effects of pesticides among individuals, populations and fumigation periods. In the exposed group it is possible to assess whether biomarkers return to baseline within two to three months after cessation of exposure. Regarding the neuropsychological outcome, effects of aging will be avoided by selecting people from 18 to 50 years old; several tests will be performed and several cognitive areas explored; and we will increase the specificity of the diagnostic tool, according to suggestions described in the literature [48]. Keep in mind that the neuropsychological evaluation will be used as the gold standard for the diagnosis of cognitive impairment. An acknowledged weakness of the study is that no result may be related to a specific chemical compound, because it is not possible to identify the specific pesticides being used by the workers or to determine their metabolites in either biological or environmental specimens. According to the literature [49,50], we assume that organophosphates and carbamates are the most used pesticides in the region, given the type of crop (grapes and citrus fruits) and the season of the evaluation. The study did not evaluate retired workers because the aim was to assess the biomarker in active workers, not the long-term consequences or causality of exposure; nevertheless, this could be a complementary result. This study design was implemented during the 2011–2013 sample collection, neuropsychological evaluation and data collection process. So far, seasonal recruitment of participants has been a success given the difficulties found in the workplace, which have been sorted out during subsequent years through the experience gained and contacts made. The support of the municipalities and farmers' associations has been important to avoid problems in the field. Based on our experience over these past few years, the use of the mobile laboratory has been a success; the application of the test battery by a single professional has been important in order to avoid bias; and quality control of the laboratory procedures has been adequate as well. Abbreviations ChE: Plasma cholinesterase; AChE: Erythrocyte acetylcholinesterase; ACPH: Acylpeptide-hydrolase; BuChE: Butyrylcholinesterase; DFP: Diisopropyl fluorophosphate; EDTA: Type of anticoagulant vacutainer; PBS: Phosphate buffer saline; DTNB: 5,5′-dithio-bis-2-nitrobenzoic acid; DTT: Dithiothreitol; AANA: N-acetyl-L-alanine p-nitroaniline; DMSO: Dimethylsulfoxide; ALLOEP: Days of Adjusted Life Long Occupational Exposure to Pesticides; ANOVA: Analysis of variance; ROC curves: Receiver Operating Characteristic curves. References Steenland K, Wesseling C, Román N, Quirós I, Juncos JL. Occupational pesticide exposure and screening tests for neurodegenerative disease among an elderly population in Costa Rica. Environ Res. 2013;120:96–101. Available from: http://www.ncbi.nlm.nih.gov/pubmed/23092715. Demers P, Rosenstock L. Occupational injuries and illnesses among Washington State agricultural workers. Am J Public Health. 1991;81(12):1656–8. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1405289&tool=pmcentrez&rendertype=abstract. Cavieres FMF. Exposición a pesticidas y toxicidad reproductiva y del desarrollo en humanos: Análisis de la evidencia epidemiológica y experimental. Rev Med Chil. 2004;132(7):873–9. Nieuwenhuijsen MJ, Dadvand P, Grellier J, Martinez D, Vrijheid M.
Environmental risk factors of pregnancy outcomes: a summary of recent meta-analyses of epidemiological studies. Environ Health. 2013;12(1):6. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3582445&tool=pmcentrez&rendertype=abstract. Roldán-Tapia L, Parrón T, Sánchez-Santed F. Neuropsychological effects of long-term exposure to organophosphate pesticides. Neurotoxicol Teratol. 2005;27(2):259–66. Available from: http://www.ncbi.nlm.nih.gov/pubmed/15734277. Roldán-Tapia L, Leyva A, Laynez F, Santed FS. Chronic neuropsychological sequelae of cholinesterase inhibitors in the absence of structural brain damage: two cases of acute poisoning. Environ Health Perspect. 2005;113(6):762–6. Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1257603/. Jamal GA, Hansen S, Pilkington A, Buchanan D, Gillham RA, Abdel-Azis M, et al. A clinical neurological, neurophysiological, and neuropsychological study of sheep farmers and dippers exposed to organophosphate pesticides. Occup Environ Med. 2002;59(7):434–41. Rohlman DS, Anger WK, Lein PJ. Correlating neurobehavioral performance with biomarkers of organophosphorous pesticide exposure. Neurotoxicology. 2011;32(2):268–76. Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3057226/. Ross SM, McManus IC, Harrison V, Mason O. Neurobehavioral problems following low-level exposure to organophosphate pesticides: a systematic and meta-analytic review. Crit Rev Toxicol. 2013;43(1):21–44. Available from: http://informahealthcare.com/doi/abs/10.3109/10408444.2012.738645. Mackenzie Ross SJ, Brewin CR, Curran HV, Furlong CE, Abraham-Smith KM, Harrison V. Neuropsychological and psychiatric functioning in sheep farmers exposed to low levels of organophosphate pesticides. Neurotoxicol Teratol. 2010;32(4):452–9. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3042861&tool=pmcentrez&rendertype=abstract. McCauley LA, Anger WK, Keifer M, Langley R, Robson MG, Rohlman D. Studying health outcomes in farmworker populations exposed to pesticides. Environ Health Perspect. 2006;114(6):953–60. Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1480483/. Pancetti F, Olmos C, Dagnino-Subiabre A, Rozas C, Morales B. Noncholinesterase effects induced by organophosphate pesticides and their relationship to cognitive processes: implication for the action of acylpeptide hydrolase. J Toxicol Environ Health B Crit Rev. 2007;10(8):623–30. Available from: http://www.ncbi.nlm.nih.gov/pubmed/18049927. Rosenblum JS, Kozarich JW. Prolyl peptidases: a serine protease subfamily with high potential for drug discovery. Curr Opin Chem Biol. 2003;7:496–504. Scaloni A, Barra D, Jones WM, Manning JM. Human Acylpeptide Hydrolase. J Biol Chem. 1994;269(21):15076–84. Polgár L. The prolyl oligopeptidase family. Cell Mol Life Sci. 2002;59(2):349–62. Senthilkumar R, Reddy PN SK. Studies on trypsin-modified bovine and human lens acylpeptide hydrolase. Exp Eye Res. 2001;72(3):301–10. Kontani K, Taguchi O, Narita T, Hiraiwa N, Sawai S, Hanaoka J, et al. Autologous dendritic cells or cells expressing both B7-1 and MUC1 can rescue tumor-specific cytotoxic T lymphocytes from MUC1-mediated apoptotic cell death. J Leukoc Biol. 2000;68(2):225–32. Palmieri G, Bergamo P, Luini A, Ruvo M, Gogliettino M, Langella E, et al. Acylpeptide hydrolase inhibition as targeted strategy to induce proteasomal down-regulation. PLoS One. 2011;6(10):e25888. Yamaguchi M, Kambayashi D, Toda J, Sano T, Toyoshima S, Hojo H. 
Acetylleucine chloromethyl ketone, an inhibitor of acylpeptide hydrolase, induces apoptosis of U937 cells. Biochem Biophys Res Commun. 1999;263(1):139–42. Scaloni A, Jones W, Pospischil M, Sassa S, Schneewind O, Popowicz AM, et al. Deficiency of acylpeptide hydrolase in small-cell lung carcinoma cell lines. J Lab Clin Med. 1992;120(4):546–52. Erlandsson R, Boldog F, Persson B, Zabarovsky ER, Allikmets RL, Sümegi J, et al. The gene from the short arm of chromosome 3, at D3F15S2, frequently deleted in renal cell carcinoma, encodes acylpeptide hydrolase. Oncogene. 1991;6(7):1293–5. Olmos C, Sandoval R, Rozas C, Navarro S, Wyneken U, Zeise M. Effect of short-term exposure to dichlorvos on synaptic plasticity of rat hippocampal slices: involvement of acylpeptide hydrolase and alpha(7) nicotinic receptors. Toxicol Appl Pharmacol. 2009;238:37–46. Sandoval R, Navarro S, García-Rojo G, Calderón R, Pedrero A, Sandoval S, et al. Synaptic localization of acylpeptide hydrolase in adult rat telencephalon. Neurosci Lett. 2012;520(1):98–103. Yamin R, Bagchi S, Hildebrant R, Scaloni A, Widom RLAC. Acyl peptide hydrolase, a serine proteinase isolated from conditioned medium of neuroblastoma cells, degrades the amyloid-beta peptide. J Neurochem. 2007;100(2):458–67. Yamin R, Zhao C, O'Connor PB, McKee AC, Abraham CR. Acyl peptide hydrolase degrades monomeric and oligomeric amyloid-beta peptide. Mol Neurodegener. 2009;4:33. Richards PG, Johnson MK, Ray DE. Identification of acylpeptide hydrolase as a sensitive site for reaction with organophosphorus compounds and a potential target for cognitive enhancing drugs. Mol Pharmacol. 2000;58(3):577–83. Kim JH, Stevens RC, MacCoss MJ, Goodlett DR, Scherl A, Richter RJ, et al. Identification and characterization of biomarkers of organophosphorus exposures in humans. Adv Exp Med Biol. 2010;660:61–71. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=2878371&tool=pmcentrez&rendertype=abstract. Lockridge O. Human protein data. In: Haeberli A, editor. Human protein. Weinhein, Ney York, Cambridge: VCH: VHC; 1992. Farahat TM, Abdelrasoul GM, Amr MM, Shebl MM, Farahat FM, Anger WK. Neurobehavioural effects among workers occupationally exposed to organophosphorous pesticides. Occup Environ Med. 2003;60(4):279–86. Available from: http://www.ncbi.nlm.nih.gov/pubmed/22377683. de Agricultura M. Ministerio de Agricultura. 2008. Fleiss JL. Statistical methods for rates and proportions. second. Statistical methods for rates and proportions. 2nd ed. New York: John Wiley & Sons; 1981. Pértegas Díaz S, Pita Fernández S. Calculo de tamaño muestral para estudios de casos y controles. Cad Aten Primaria. 2002;9:148–50. Ellman GL, Courtney KD, Andres V, Feather-Stone RM. A new and rapid colorimetric determination of acetylcholinesterase activity. Biochem Pharmacol. 1961;7:88–95. Available from: http://www.ncbi.nlm.nih.gov/pubmed/13726518. Smith PK, Krohn RI, Hermanson GT, Mallia AK, Gartner FH, Provenzano MD, et al. Measurement of protein using bicinchoninic acid. Anal Biochem. 1985;150(1):76–85. Available from: http://linkinghub.elsevier.com/retrieve/pii/0003269785904427. Medical A. HEMOGLOBIN PROCEDURE Intended for the Quantitative Determination of Hemoglobin in the blood [Internet]. Atlas medicial. 2014; p. 0–1. Available from: http://www.atlas-site.co.uk/index_files/website/$8.02.46.0.0500.pdf. Quintana M, Peña-Casanova J, Sánchez-Benavides G, Langohr K, Manero RM, Aguilar M, et al. Spanish multicenter normative studies_ norms for the abbreviated Barcelona Test. 
Arch Clin Neuropsychol. 2011;26(2):144–57. Rognonia T, Casals-Colla M, Sánchez-Benavidesa G, Quintanaa M, Manerob RM, Calvoa L, et al. Spanish normative studies in young adults (NEURONORMA young adults project): Norms for Stroop Color—Word Interference and Tower of London-Drexel University tests. Neurologia. 2013;28(2):73–80. Tamayoa F, Casals-Colla M, Sánchez-Benavidesa G, Quintanaa RMM M, Rognonia T, Calvoa L, et al. Spanish normative studies in a young adult population (NEURONORMA young adults project): Guidelines for the span verbal, span visuo-spatial, Letter-Number Sequencing, Trail Making Test and Symbol Digit Modalities Test. Neurologia. 2012;27(6):319–29. Palomoa R, Casals-Colla M, Sánchez-Benavidesa G, Quintanaa M, Manerob RM, Rognonia T, et al. Na-Casanovab. Spanish normative studies in young adults (NEURONORMA young adults project): Norms for the Rey—Osterrieth Complex Figure (copy and memory) and Free and Cued Selective Reminding Test. Neurologia. 2013;28(4):226–35. Rapport J, Farchione J, Dutra L, Webster S, Charter A. Measures of hemi-inattention on the Rey Figure Copy for the Lezak-Osterrieth Scoring Method. Clin Neuropsychol. 1996;10:450–4. Lezak MD. Neuropsychological assessment. New York: Oxford University Press; 1995. Khan K, Ismail AA, Abdel Rasoul G, Bonner MR, Lasarev MR, Hendy O, et al. Longitudinal assessment of chlorpyrifos exposure and self-reported neurological symptoms in adolescent pesticide applicators. BMJ Open. 2014;4(3):e004177. Available from: http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=3948636&tool=pmcentrez&rendertype=abstract. Pathak MK, Fareed M, Bihari V, Mathur N, Srivastava AK, Kuddus M, et al. Cholinesterase levels and morbidity in pesticide sprayers in North India. Occup Med (Lond). 2011;61(7):512–4. Available from: http://www.ncbi.nlm.nih.gov/pubmed/21685404. Keifer MC. Effectiveness of interventions in reducing pesticide overexposure and poisonings. Am J Prev Med. 2000;18(4 Suppl):80–9. Available from: http://www.ncbi.nlm.nih.gov/pubmed/10793284. Quandt SA, Hernández-Valero MA, Grzywacz JG, Hovey JD, Gonzales M, Arcury TA. Workplace, household, and personal predictors of pesticide exposure for farmworkers. Environ Health Perspect. 2006;114:943–52. Arcury TA, Quandt SA, Barr DB, Hoppin JA, McCauley L, Grzywacz JG, et al. Farmworker Exposure to pesticides: methodologic issues for the collection of comparable data. Environ Health Perspect. 2006;114(6):923–8. Available from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1480495/. Strong LL, Thompson ÃB, Koepsell TD, Meischke H. Factors associated with pesticide safety practices in farmworkers. Am J Ind Med. 2008;51:69–81. Franzen MD, Burgess EJ, Smith-Seemiller L. Methods of estimating premorbid functioning. Arch Clin Neuropsychol. 1997;12(8):711–38. Available from: http://www.ncbi.nlm.nih.gov/pubmed/14590649. Pancetti Floria CMRM. Epidemiology of organophosphate and carbamate intoxications in Chile. In: Tetsuo Satoh RCG, editor. Anticholinesterase Pesticides: Metabolism, Neurotoxicity, and Epidemiology. ISBN: 978-0-470-41030-1; 2010 Fiedler N, Kipen H, Kelly-McNeil K, Fenske R. Long-term use of organophosphates and neuropsychological performance. Am J Ind Med. 1997;32(5):487–96. Available from: http://www.ncbi.nlm.nih.gov/pubmed/9327072. 
The authors would like to thank all the team participating in this project: the clinical officers and drivers in the field, the laboratory technicians who processed the samples, the speech therapist who carried out the neuropsychological evaluations in the field, the administrators, and those who provided financial support. The project Fondef D09I1057 was funded by FONDEF, Fondo de Fomento al Desarrollo Científico y Tecnológico, CONICYT Chile (Fund for the Promotion of Scientific and Technological Development, National Commission for Scientific and Technological Research, Chile). Thanks to CONICYT-PCHA, Doctorado Nacional, 2014–21141115, which granted a scholarship to Sebastián Corral to attend a PhD program in Psychology. Thanks to Ann Davenport for improving the language and editing the manuscript. The project received full ethical approval from the Ethics and Scientific Committee of the Faculty of Medicine, Universidad Católica del Norte (dated August 25, 2014). Department of Public Health, Faculty of Medicine, Universidad Católica del Norte, Calle Larrondo 1281, Postal Code 1780000, Coquimbo, Chile Muriel Ramírez-Santana Department of Biomedical Sciences, Faculty of Medicine, Universidad Católica del Norte, Calle Larrondo 1281, Postal Code 1780000, Coquimbo, Chile Liliana Zúñiga, Sebastián Corral, Rodrigo Sandoval & Floria Pancetti Psychology Department, FACSO, Universidad de Chile, Santiago, Chile Sebastián Corral Department for Health Evidence, Radboud Institute for Health Sciences, Radboud university medical center, Geert Grooteplein-Zuid 10, 6525 GA, Nijmegen, The Netherlands Paul TJ Scheepers & Nel Roeleveld Department of Primary and Community Care, Radboud Institute for Health Sciences, Radboud university medical center, Geert Grooteplein-Zuid 10, 6525 GA, Nijmegen, The Netherlands Koos Van der Velden Department of Pediatrics, Radboudumc Amalia Children's Hospital, Radboud university medical center, Geert Grooteplein-Zuid 10, 6525 GA, Nijmegen, The Netherlands Nel Roeleveld Correspondence to Muriel Ramírez-Santana or Floria Pancetti. FP is the project leader and principal investigator of this study, responsible for the technical and laboratory work. MR is responsible for the design of the research protocol, coordinated the involvement of stakeholders in the project, is doing the overall analysis of the data and drafted this manuscript. RS and LZ contributed to the development of the study protocol, questionnaire design, field work, and laboratory and logistic support while collecting samples. SC is responsible for the design of the test battery and the overall neuropsychological evaluation of the participants. NR, KV and PS contributed to improving the study protocol and have guided the data analysis. All authors provided comments on the draft and have read and approved the final version. This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. Ramírez-Santana, M., Zúñiga, L., Corral, S. et al.
Assessing biomarkers and neuropsychological outcomes in rural populations exposed to organophosphate pesticides in Chile – study design and protocol. BMC Public Health 15, 116 (2015). https://doi.org/10.1186/s12889-015-1463-5 Received: 21 January 2015
Published: 21 May 2018 Massively Parallel Coincidence Counting of High-Dimensional Entangled States Matthew Reichert ORCID: orcid.org/0000-0003-1701-93761, Hugo Defienne1 & Jason W. Fleischer1 Scientific Reports volume 8, Article number: 7925 (2018) Entangled states of light are essential for quantum technologies and fundamental tests of physics. Current systems rely on entanglement in 2D degrees of freedom, e.g., polarization states. Increasing the dimensionality provides exponential speed-up of quantum computation, enhances the channel capacity and security of quantum communication protocols, and enables quantum imaging; unfortunately, characterizing high-dimensional entanglement of even bipartite quantum states remains prohibitively time-consuming. Here, we develop and experimentally demonstrate a new theory of camera detection that leverages the massive parallelization inherent in an array of pixels. We show that a megapixel array, for example, can measure a joint Hilbert space of \({10}^{12}\) dimensions, with a speed-up of nearly four orders-of-magnitude over traditional methods. The technique uses standard geometry with existing technology, thus removing barriers of entry to quantum imaging experiments, generalizes readily to arbitrary numbers of entangled photons, and opens previously inaccessible regimes of high-dimensional quantum optics. Broad beams of quantum light are a natural pathway to large Hilbert spaces1,2,3,4,5,6,7,8,9,10,11, as they have high-dimensional entanglement in transverse spatial modes12. Spatial correlation of biphotons has led to sub-shot-noise quantum imaging13,14, enhanced resolution15, quantum ghost imaging16, and proposals for quantum lithography17. Despite this work, high-dimensional quantum optics remains underdeveloped, largely due to difficulty in measuring the full joint probability distribution. Traditionally, experiments measure coincidences between two single-photon counting modules (SPCMs) that are each scanned over their own subspace to build up a measurement point-by-point. Such a procedure is photon-inefficient, making high-dimensional measurements tedious and prohibitively time consuming. Full quantum-state measurements are impractical even for a relatively small number of dimensions18,19. In this work, we present a rapid and efficient method of measuring a high-dimensional biphoton joint probability distribution via massively parallel coincidence counting. We use a single-photon-sensitive electron-multiplying (EM) CCD camera as a dense array of photon detectors to measure all dimensions of the joint Hilbert space simultaneously. For example, a typical megapixel camera can record a one trillion-dimensional joint Hilbert space nearly 10,000× faster than traditional raster-scanning methods. This speed-up enables observation of high-dimensional features that cannot be seen when only low-dimensional measurements (projections) are made. Recent efforts with single-photon-sensitive cameras have characterized spatial entanglement20,21,22,23,24,25, but results relied on projection onto only two dimensions and considered only homogeneous distributions, limiting measurements to EPR-type entanglement. In these works, measurements were spatially averaged over the entire plane (losing local detail). Further, to mitigate complications of accidental counts, coincidence measurements were performed in the low-count-rate regime.
Here, we show that this assumption is unnecessary and give a general expression for the biphoton joint probability distribution. The exact expression follows from measurements of single- and coincidence-count probabilities observed between every pair of pixels over the entire frame simultaneously. The resulting distribution is valid for arbitrary count rates up to detector saturation, enabling more accurate measurements, faster acquisition speeds, and optimization of the signal-to-noise ratio. To demonstrate our method, we characterize the properties of photon pairs entangled in transverse spatial degrees of freedom. A pure entangled photon state is described by the biphoton wave function \(\psi ({{\boldsymbol{\rho }}}_{i},{{\boldsymbol{\rho }}}_{j})\), where \({{\boldsymbol{\rho }}}_{i}={x}_{i}{\hat{{\bf{x}}}}_{i}+{y}_{i}{\hat{{\bf{y}}}}_{i}\), and likewise for \({{\boldsymbol{\rho }}}_{j}\). The joint probability of observing one photon at \({{\boldsymbol{\rho }}}_{i}\) and its partner at \({{\boldsymbol{\rho }}}_{j}\) is \({\rm{\Gamma }}({{\boldsymbol{\rho }}}_{i},{{\boldsymbol{\rho }}}_{j})={|\psi ({{\boldsymbol{\rho }}}_{i},{{\boldsymbol{\rho }}}_{j})|}^{2}\), which in a discretized basis is \({{\rm{\Gamma }}}_{ij}\). Since each photon may be found in a 2D space (x i , y i ), the joint probability distribution is a 4D distribution. Like classical light-field methods26, observation of the full 4D distribution shows details and features that would be lost with conventional projection methods. While we focus on spatial components, we emphasize that our technique may be readily extended to other degrees of freedom, such as spectral modes or orbital angular momentum, by suitable mapping onto the pixels of the camera. A schematic of the measurement and processing procedure is shown in Fig. 1. Spatially entangled photon pairs are generated via spontaneous parametric down-conversion (SPDC) in a β-barium borate (BBO) crystal, cut for type-I phase matching. The spatial entanglement structure has been extensively studied12,13,15,21,24,25,27,28,29,30,31,32,33, and we use it here for a clear experimental demonstration of high-dimensional characteristics of entangled photons. The crystal is pumped by a 120 mW, 400 nm cw laser diode that is spatially filtered and collimated (not shown). Spectral filters block the pump beam and select near-degenerate photon pairs at 800 nm (a large bandwidth of 40 nm (FWHM), gives rise to the relatively thick rings in the far field32,33). These are placed immediately after the BBO crystal to prevent induced fluorescence in the subsequent optics. A lens images the far field of the crystal onto an EMCCD camera (Andor iXon Ultra). Measuring the biphoton joint probability distribution with an EMCCD camera. (a) Experimental setup for measuring far-field type-I SPDC. (b–e) Flow chart of data processing. (b) The camera acquires many thresholded frames from which we calculate both (c) the average of all frames 〈\({C}_{i}\)〉 (indicated by 〈·〉) and (d) the average of the tensor product of each frame with itself 〈\({C}_{ij}\)〉 (⊗, Eq. (2)) (shown here for \(j=[{x}_{j}=70,\,{y}_{j}=33]\), indicated by the blue ×). Most coincidences are accidentals between photons from different pairs, yielding the apparent similarity between (c) and (d). Genuine coincidences from anticorrelated entangled photons appearing within the boxed region give a difference between the two (see insets). (e) The conditional probability distribution, via Eq. 
(4), shows anti-correlation of paired photons localized about \(i\) = [−70, −32]. Measurement of the biphoton joint probability distribution \({{\rm{\Gamma }}}_{ij}\) is possible with an EMCCD camera due to its high quantum efficiency and low noise floor. The camera is operated in the photon-counting regime, where each pixel is set to one if its gray-level output is above a threshold and zero otherwise34 (see Methods). The data consist of a set of \(N\) frames \({C}_{i,n}\) = {0, 1}, where subscript \(i\) is the pixel index (spatial mode) and \(n\) is the frame number. Each frame consists of many counts from both photon events and electronic noise (mainly due to clock-induced charge34). The singles-count probability is $$\langle {C}_{i}\rangle =\sum _{m}{P}_{m}({\mu }_{i|m}+{p}_{el}{\mu }_{\bar{i}|m}),$$ where \({P}_{m}\) is the distribution of the number \(m\) of photon pairs and \({p}_{el}\) is the electronic count probability (e.g., dark counts). The factors \({\mu }_{i|m}\) and \({\mu }_{\bar{i}|m}\) represent the conditional probabilities of detecting at least one photon and zero photons, respectively, given \(m\) pairs arriving within the detector time window (see Table 1)35. Table 1 Probabilities of single detection \({\mu }_{p|m}\) and coincidence \({\mu }_{pq|m}\) conditioned on the number of photon pairs m. Since the duration of both the exposure and read-out of each frame of the EMCCD is much longer than the biphoton correlation time, photons from each pair arrive at the camera within a single frame. The coincidence count probability between all pixels i and j is measured by the average of the tensor product of each frame with itself: $$\langle {C}_{ij}\rangle =\frac{1}{N}\sum _{n=1}^{N}{C}_{i,n}{C}_{j,n}.$$ In addition to genuine coincidence counts from entangled photon pairs, there are also accidental counts from uncorrelated photons and noise. These can be accounted for in general by the expression $$\langle {C}_{ij}\rangle =\sum _{m}{P}_{m}({\mu }_{ij|m}+{p}_{el}({\mu }_{i\bar{j}|m}+{\mu }_{\bar{i}j|m})+{p}_{el}^{2}{\mu }_{\bar{i}\bar{j}|m}),$$ where each of the terms \({\mu }_{pq|m}\) is related to \({{\rm{\Gamma }}}_{pq}\) and its marginal (see Table 1). The terms in Eq. (3) are coincidences between (1) at least two photons, (2) at least one photon and one electronic noise event, and (3) two noise events. For a Poissonian distribution of pairs, Eq. (3) simplifies, giving an analytic expression for \(\langle {C}_{ij}\rangle \) in terms of \(\langle {C}_{i}\rangle \), \(\langle {C}_{j}\rangle \), and \({{\rm{\Gamma }}}_{ij}\). With Eq. (1) this yields $${{\rm{\Gamma }}}_{ij}=\alpha \,\mathrm{ln}\left(1+\frac{\langle {C}_{ij}\rangle -\langle {C}_{i}\rangle \langle {C}_{j}\rangle }{(1-\langle {C}_{i}\rangle )(1-\langle {C}_{j}\rangle )}\right)$$ where α is a constant that depends on the quantum efficiency of the system (see Supplementary Information). Equation (4) includes the case when several photons arrive at the same pixel. This case has been excluded explicitly by other treatments21,22,35, even though collinear geometry and high spatial entanglement make this case the most likely one. The paradox is often circumvented by considering the low-photon-count limit, in which the joint probability distribution \({{\rm{\Gamma }}}_{ij}\) becomes proportional to the measured coincidence count rate \(\langle {C}_{ij}\rangle \). However, this assumption is not necessary here; indeed, Eq. (4) remains valid up to detector saturation.
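To make Eqs. (2) and (4) concrete, here is a minimal numerical sketch (our own illustration, not the authors' code). It builds the full d × d coincidence matrix in memory, which is practical only for a small region of interest; a full-frame analysis would proceed in blocks. The overall scale α is set to 1, since only the shape of the distribution matters here, and pixels that fire in every frame (saturation) are assumed absent:

```python
import numpy as np

def biphoton_joint_distribution(frames, alpha=1.0):
    """Estimate Gamma_ij from thresholded single-photon frames.

    frames : (N, d) array of 0/1 detection events, one row per frame,
             with the 2D pixel grid flattened into d spatial modes.
    """
    frames = np.asarray(frames, dtype=float)
    N = frames.shape[0]
    C_i = frames.mean(axis=0)                   # singles probabilities <C_i>
    C_ij = frames.T @ frames / N                # coincidence probabilities <C_ij>, Eq. (2)
    accidentals = np.outer(C_i, C_i)            # <C_i><C_j>
    denom = np.outer(1.0 - C_i, 1.0 - C_i)      # (1 - <C_i>)(1 - <C_j>)
    return alpha * np.log1p((C_ij - accidentals) / denom)   # Eq. (4)
```

A single row of the returned matrix, reshaped back to the pixel grid, corresponds to a conditional slice such as the one shown in Fig. 1e.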
The formalism thus covers the entire range of photon intensities and types of detection events, and generalizes straightforwardly to joint distributions of higher numbers of entangled photons. Figure 1d shows the coincidence count distribution for a particular pixel \(j\) = [\({x}_{j}\) = 70, \({y}_{j}\) = 33], i.e., a 2D slice for all \(i\) = {\({x}_{i}\), \({y}_{i}\)} through the 4D joint distribution \(\langle {C}_{ij}\rangle \). It includes genuine coincidences as well as a large background from accidental counts. Due to the large number of pairs in each frame (~\({10}^{4}\)), most coincidences are accidentals between photons from different pairs; indeed, Fig. 1d appears very similar to the singles count distribution \(\langle {C}_{i}\rangle \) in Fig. 1c. Genuine coincidences between photons from the same pair, shown in the inset, rise above the background from accidentals. The corresponding 2D slice through the 4D \({{\rm{\Gamma }}}_{ij}\), calculated via Eq. (4), is displayed in Fig. 1e. When one photon is found at \(j\) = [70, 33], its entangled partner is localized near \(i\) = [−70, −32], indicating a high degree of anti-correlation. Such conditional distributions \({{\rm{\Gamma }}}_{i|j}\) are measured simultaneously for all \(j\), thus constituting a full measurement of the 4D biphoton joint probability distribution. Complete measurements of high-dimensional joint Hilbert spaces contain detailed, localized information not available in lower-dimensional projections. To demonstrate this, we show \({{\rm{\Gamma }}}_{i|j}\) for entangled photons detected at different radial distances \(j\) = [\({x}_{j}\), \({y}_{j}\)] from the center of the beam (Fig. 2a–c). There are two main observations: 1) as \({x}_{j}\) increases, \({x}_{i}\) decreases, and 2) the width along the radial directions increases. The former is necessary to maintain a fixed sum, i.e., \({x}_{i}\) + \({x}_{j}\) ≈ 0, while the latter arises from the radial dependence of the uncertainty in the wave vector k, \({\rm{\Delta }}{k}_{\rho }\approx {k}_{\rho }|{\rm{\Delta }}{\bf{k}}|/|{\bf{k}}|\). This effect comes from the rather large spectral bandwidth of the filter (40 nm), as different frequencies are phase-matched at different radial momenta \({k}_{\rho }\)27,33. Observation of such features with traditional raster-scanning techniques requires multiple separate measurements. With an EMCCD camera, they are all captured simultaneously in a single image. Information contained in the full 4D measurement of the biphoton joint probability distribution. (a–c) Variation of \({{\rm{\Gamma }}}_{i|j}\) at different distances from the center, indicated by blue ×, showing anti-correlation of width that increases with |x| (see insets). (d) Projection of \({{\rm{\Gamma }}}_{ij}\) onto sum coordinates averages the variations in (a–c). (e–g) 2D slices of \({{\rm{\Gamma }}}_{ij}\) for fixed \([{x}_{i},\,{x}_{j}]\) (indicated by blue dashed lines in inset of 〈\({C}_{i}\)〉) showing variation in anti-correlation with horizontal separation. (h) Projection of \({{\rm{\Gamma }}}_{ij}\) onto \([{y}_{i},\,{y}_{j}]\) (integration over \({x}_{i}\) and \({x}_{j}\)) averages the structures in (e–g), giving only a mean profile. In previous studies, the intercorrelation function was measured via image correlation techniques21,22, without measuring the full 4D \({{\rm{\Gamma }}}_{ij}\). However, such measurements provide only the globally averaged correlation and thus neglect any potential internal variation in the joint probability distribution.
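The 2D views in Fig. 2 are simply slices and projections of the 4D distribution. A minimal sketch of these operations follows (our own illustration, assuming the d × d matrix from the previous sketch has been reshaped, for an ny × nx region of interest, into a gamma4d array indexed as [y_i, x_i, y_j, x_j]); the conditional and fixed-column slices correspond to panels (a–c) and (e–g), and the sum projection to panel (d), discussed next:

```python
import numpy as np

def conditional_slice(gamma4d, yj, xj):
    """Gamma_{i|j}: 2D distribution of photon i given its partner at pixel (yj, xj)."""
    return gamma4d[:, :, yj, xj]

def column_slice(gamma4d, xi, xj):
    """Vertical correlations Gamma[y_i, y_j] for two fixed columns x_i, x_j (Fig. 2e-g)."""
    return gamma4d[:, xi, :, xj]

def sum_projection(gamma4d):
    """Project onto the sum coordinates (y_i + y_j, x_i + x_j), as in Fig. 2d
    (the 1/sqrt(2) normalisation of the sum axes is omitted here)."""
    ny, nx = gamma4d.shape[:2]
    proj = np.zeros((2 * ny - 1, 2 * nx - 1))
    for yi in range(ny):
        for xi in range(nx):
            # shift the conditional slice for photon i at (yi, xi) by (yi, xi) and accumulate
            proj[yi:yi + ny, xi:xi + nx] += gamma4d[yi, xi]
    return proj
```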
Here, we calculate the intercorrelation function by projecting \({{\rm{\Gamma }}}_{ij}\) onto the sum coordinate \([({x}_{i}+{x}_{j})/\sqrt{2},({y}_{i}+{y}_{j})/\sqrt{2}]\) (Fig. 2d). The peak near the center indicates that entangled photon pairs are always found near equal and opposite sides of the center, within anti-correlation widths \({\sigma }_{x,+}\) = 20.9 ± 0.3 μm and \({\sigma }_{y,+}\) = 18.6 ± 0.3 μm. Our more-resolved methods show that, even in this simple case, the corresponding widths of the \({{\rm{\Gamma }}}_{i|j}\) in Fig. 2a–c vary significantly, with \({\sigma }_{x}\) = 16.1 ± 1.4 μm, 23.0 ± 1.5 μm, and 34.9 ± 2.5 μm, respectively. Other slices of \({{\rm{\Gamma }}}_{ij}\), along different coordinates, contain different information about the entangled photon pairs. For example, we examine correlations in vertical position within specific columns of the image by fixing \([{x}_{i},\,{x}_{j}]\). While some variation, such as phase-matching and spatial walk-off effects (as observed in type II SPDC28), can survive averaging by projection onto 2D planes, our method is capable of measuring arbitrary 4D joint probability distributions. Examples in Fig. 2e–g show strong vertical anti-correlation that changes depending on the horizontal separation of the selected columns (indicated in the insets), with radial variation that diminishes for larger \(|x|\). As before, projecting \({{\rm{\Gamma }}}_{ij}\) averages this variation (Fig. 2h), resulting in lost information22,23,28. The massively parallel capability of EMCCD cameras allows for much faster measurement of \({{\rm{\Gamma }}}_{ij}\) than traditional scanning techniques. Raster-scanning pairs of SPCMs, each in a \(d\)-dimensional plane, requires \({d}^{2}\) measurements to build a complete measurement. In contrast, an EMCCD measures the entire plane at once, with pixels at each point in the array. While SPCMs have a high effective frame rate (10s of MHz), the frame rate of an EMCCD camera is limited by the readout process (which scales as \(\sqrt{d}\,\)36). Data shown in Figs. 1 and 2 were taken from a subset of 251 × 251 pixels, corresponding to a four-billion-dimensional joint Hilbert space, and were acquired in a matter of hours. A megapixel EMCCD can record a \((1024\times 1024)^{2}\) ≈ one trillion-dimensional joint Hilbert space with a signal-to-noise ratio of 10 in approximately 11 hours. The same measurement performed with raster-scanning SPCMs is estimated to take 9 years, giving a camera improvement of ~7000×. The EMCCD camera also outperforms compressive sensing methods29 for large joint Hilbert spaces and does not require sparsity or numerical retrieval. Camera-based methods hold clear advantages for quantum imaging applications. Imaging with perfectly correlated photon pairs—with biphoton wave function \(\psi ({{\boldsymbol{\rho }}}_{i},{{\boldsymbol{\rho }}}_{j})=\delta ({{\boldsymbol{\rho }}}_{i}-{{\boldsymbol{\rho }}}_{j})\)—gives a probability distribution of both photons at the same position in the image plane $${\rm{\Gamma }}({\boldsymbol{\rho }},{\boldsymbol{\rho }})\propto {\left|\int {t}^{2}({\boldsymbol{\rho }}^{\prime })\,{h}^{2}({\boldsymbol{\rho }}-{\boldsymbol{\rho }}^{\prime })\,{\rm{d}}{\boldsymbol{\rho }}^{\prime }\right|}^{2}$$ where \(t({\boldsymbol{\rho }})\) is the object transmittance and \(h({\boldsymbol{\rho }})\) is the point spread function. The fact that the square of \(h({\boldsymbol{\rho }})\) appears in Eq.
(5) means that biphoton imaging has higher resolution than classical coherent imaging [though it has the same resolution as classical incoherent light (of the same coherence area)17,30,31,37,38]. To demonstrate this, we image a resolution chart using spatially entangled biphoton illumination—where one photon is localized near its partner (\(i\) ≈ \(j\))—by projecting the output facet of the nonlinear crystal onto the object, which is then imaged onto the camera (see Fig. 3a, Methods). To ensure the validity of Eq. (5), we measure the incident \({{\rm{\Gamma }}}_{ij}\) without the object; the results confirm strong spatial correlation, visible in both the conditional distributions (Fig. 3b,c) and the projection onto the difference coordinates (Fig. 3d). By fitting to a Gaussian distribution, we find the correlation width \({\sigma }_{-}\) = 8.5 ± 0.5 μm. Measurements are then repeated with the object; a 3D projection of \({{\rm{\Gamma }}}_{ij}\), shown in Fig. 3e, displays the image of the resolution chart, its appropriate basis (diagonal plane), and the final spatial correlation distribution of the biphotons (thickness of the diagonal plane). Furthermore, coincidence images taken with entangled photon pairs (Fig. 3f) show nearly identical resolution as incoherent light17,30,31—as measured by direct imaging (singles counts) of photon pair illumination—(Fig. 3g), and clear improvement in resolution over those with an 808 nm laser diode (Fig. 3h), with less noise and higher visibility. For example, the bars within the red boxed region (group 4, element 6) are clearly resolved with entangled photon pairs (\({{\rm{\Gamma }}}_{ii}\), visibility of 0.33 ± 0.03) and incoherent light (\({{\rm{\Gamma }}}_{i}\), visibility of 0.37 ± 0.03), but not with classical coherent light (visibility < 0.04). Ideally, the visibility for entangled photon pairs and incoherent light should be the same; the discrepancy here may be due to the way we approximate \({{\rm{\Gamma }}}_{ii}\) with \({{\rm{\Gamma }}}_{i,i+1}\,\)using adjacent pixels (see Methods). Biphoton imaging of a USAF resolution chart with an EMCCD camera (a) Experimental setup for imaging with the near-field of the biphoton distribution. (b–d) Measurements of incident \({{\rm{\Gamma }}}_{ij}\) (without the object), showing (b) \({{\rm{\Gamma }}}_{i|j}\) for \(j=[{x}_{i}=50\,\mathrm{\mu m},{y}_{i}=-40\,\mathrm{\mu m}]\), (c) 2D slice of \({{\rm{\Gamma }}}_{ij}\) for fixed \([{x}_{i},\,{x}_{j}]\), and (d) projection onto the difference coordinates. Each shows a high degree of spatial correlation. Black region \({x}_{j}={x}_{i}\) in (b,d) results from zeroing to eliminate the artifact from charge transfer inefficiency (see Methods and Supplementary Information). (e) 3D projection of \({{\rm{\Gamma }}}_{ij}\) onto \(({x}_{i},\,{y}_{i},\,{y}_{j})\), shows both the image of the resolution chart and spatial correlation of the entangled photons. (f–h) Comparison of imaging (f) Γ ij and (g) Γ i (via singles counts) of entangled photon pairs at 800 nm and (h) classical coherent light at 808 nm. Red boxed highlights enhanced in visibility of group 4, element 6. By using readily available technology and standard imaging geometries, our method removes barriers of entry to experiments in quantum optics. Time-resolved measurements of coincidence counts are replaced by time-averaged camera measurements of photon correlations, while lower-order counts and conditional probabilities are bootstrapped to provide complete characterization of joint distribution functions. 
Further, the massive parallelization inherent in megapixel cameras enables measurement of states with orders-of-magnitude greater dimensionality than previously possible, with similar increases in acquisition speed. With suitable mapping for other degrees of freedom, e.g., dispersive elements for spectral modes or diffractive elements for orbital angular momentum, other types of quantum states can be characterized as well (including multiphoton quantum states via n-fold coincidences). Our results thus extend conventional imaging to the quantum domain, providing a pathway for quantum phase retrieval and coherence/entanglement control, and enable new means of quantum information processing with high-dimensional entangled states. The EMCCD (iXon Ultra 897, Andor) is a highly sensitive camera in which an avalanche gain of up to 1000 amplifies the signal in each pixel before readout. The camera has a pixel size of 16 × 16 μm2 with a quantum efficiency of ~70% at 800 nm. To minimize the dark-count rate compared to other noise sources in the camera, it is operated at a temperature of −85 °C. The camera is first characterized by measuring the histogram of the gray scale output of each pixel from many (~106) frames taken with the shutter closed. The histogram is primarily Gaussian, due to read noise, with an additional exponential tail towards high gray levels due primarily to clock-induced charge (CIC) noise34. We fit the histogram with a Gaussian distribution to find the center (~170) and standard deviation \(\sigma \) (4 to 20, depending on the readout rate). We have found that a threshold set to 2σ above the mean maximizes the signal-to-noise ratio. A pixel-dependent threshold is used to account for a minor inhomogeneity across the frame. There is a small cross-talk effect between pixels in a single column due to sub-optimal charge transfer efficiency upon readout (see Supplementary Information). For this reason, within each 2D frame of \({{\rm{\Gamma }}}_{i|j}\), we set to zero the 10 pixels above and below \(i\) = \(j\). Operating at higher readout rate increases noise from readout and CIC, but we have found that the increased acquisition speed more than compensates, yielding a higher signal-to-noise ratio (SNR) for the same total acquisition time. The camera is therefore operated at the fastest available settings: a horizontal readout rate of 17 MHz and a vertical shift time of 0.3 μs, with a vertical clock voltage of +4 V over factory default. The pump laser power and camera exposure time are set to give an optimum peak count probability \(\langle C\rangle \) of ~0.234. We acquire a number of frames sufficient to achieve the desired SNR. Typically, a series of ~105–107 images are acquired at a ~1–5 ms exposure time. Many sets of thresholded frames are saved to disk, where each set contains 104 frames as a logical array \({C}_{i,n}\). Each column of the array represents a single frame, and each row represents a pixel. Equation (2) is used to calculate \(\langle {C}_{ij}\rangle \) by matrix multiplication of each set of frames, which are then averaged. To minimize non-ergodic effects, the term \(\langle {C}_{i}\rangle \langle {C}_{j}\rangle \) in Eq. (4) is calculated via matrix multiplication of successive frames (see Supplementary Information). Elsewhere, \(\langle {C}_{i}\rangle \) is the average of all frames. 
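As a rough illustration of this processing chain (not the authors' code), the steps above can be written as a few array operations. The sketch below uses placeholder data and a much smaller pixel region, estimates the per-pixel threshold from the frames themselves rather than from closed-shutter frames, and assumes Eq. (2) is simply the frame-averaged product \({C}_{i}{C}_{j}\); the accidental term \(\langle {C}_{i}\rangle \langle {C}_{j}\rangle \) is taken from successive frames as described.

```python
import numpy as np

n_pix, n_frames = 1024, 10_000        # e.g. a 32 x 32 region; the paper uses 251 x 251 pixels
raw = np.random.normal(170.0, 10.0, size=(n_pix, n_frames))   # placeholder gray levels

# Pixel-dependent threshold at 2 sigma above the mean gray level.
mean = raw.mean(axis=1, keepdims=True)
sigma = raw.std(axis=1, keepdims=True)
C = (raw > mean + 2.0 * sigma).astype(np.float64)   # logical array C_{i,n}: rows = pixels, columns = frames

C_i = C.mean(axis=1)                                 # singles probability <C_i>
C_ij = C @ C.T / n_frames                            # coincidence probability <C_ij> by matrix multiplication
acc = C[:, :-1] @ C[:, 1:].T / (n_frames - 1)        # <C_i><C_j> estimated from successive frames
Gamma = C_ij - acc                                   # correlations above the accidental background
```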
In general, the biphoton wave function in an image plane is given by $${\psi }_{img}({{\boldsymbol{\rho }}}_{i},{{\boldsymbol{\rho }}}_{j})={\iint }^{}h({{\boldsymbol{\rho }}}_{i}-{{\boldsymbol{\rho }}^{\prime} }_{i})h({{\boldsymbol{\rho }}}_{j}-{{\boldsymbol{\rho }}^{\prime} }_{j})\cdot t({{\boldsymbol{\rho }}^{\prime} }_{i})t({{\boldsymbol{\rho }}^{\prime} }_{j}){\psi }_{s}({{\boldsymbol{\rho }}^{\prime} }_{i},{{\boldsymbol{\rho }}^{\prime} }_{j})d{{\boldsymbol{\rho }}^{\prime} }_{i}{\rm{d}}{{\boldsymbol{\rho }}^{\prime} }_{j}$$ where \({\psi }_{s}({{\boldsymbol{\rho }}}_{i},{{\boldsymbol{\rho }}}_{j})\) is the wave function incident on the object. With ideally correlated photon pairs, i.e., \({\psi }_{s}({{\boldsymbol{\rho }}}_{i},{{\boldsymbol{\rho }}}_{j})=\delta ({{\boldsymbol{\rho }}}_{i}-{{\boldsymbol{\rho }}}_{j})\), the square amplitude of Eq. (6) simplifies to Eq. (5). The high-resolution biphoton image therefore lies within \({{\rm{\Gamma }}}_{ii}\), where both entangled photons hit the same pixel. However, as EMCCDs are not photon-number-resolving, they cannot distinguish between one or both photons hitting the same pixel. Instead, we approximate \({{\rm{\Gamma }}}_{ii}\) by the case where the two entangled photons arrive in adjacent pixels, i.e., \({{\rm{\Gamma }}}_{i,i+1}\), as we do in Fig. 3f. This assumption is valid when the biphoton correlation width and image features are both larger than the pixel size. For ideal imaging (\(h({\boldsymbol{\rho }})\) ≈ \(\delta ({\boldsymbol{\rho }})\)), intensity images are directly proportional to \({|t({\boldsymbol{\rho }})|}^{2}\), where \(t({\boldsymbol{\rho }})\) is the complex (field) function for transmission. For entangled-photon images,\(\,{\rm{\Gamma }}({\boldsymbol{\rho }},{\boldsymbol{\rho }})\) ∝ \({|t({\boldsymbol{\rho }})|}^{4}\) (see Eq. (5)). Therefore, we show in Fig. 3f the the square root of the biphoton images, which is proportional to \({|t({\boldsymbol{\rho }})|}^{2}\), to allow fair comparison to intensity measurements in Fig. 3g,h. This also explains the relative "flatness" of Fig. 3f compared to 3g (which are both computed from the same set of image frames). O'Brien, J. L. Optical Quantum Computing. Science 318, 1567–1570 (2007). Gisin, N. & Thew, R. Quantum communication. Nature Photon. 1, 165–171 (2007). Aspuru-Guzik, A. & Walther, P. Photonic quantum simulators. Nat. Phys. 8, 285–291 (2012). Bouwmeester, D. et al. Experimental quantum teleportation. Nature 390, 575–579 (1997). Jozsa, R. & Linden, N. On the role of entanglement in quantum-computational speed-up. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 459, 2011–2032 (2003). Vidal, G. Efficient Classical Simulation of Slightly Entangled Quantum Computations. Physical Review Letters 91, 147902 (2003). Wang, C., Deng, F.-G., Li, Y.-S., Liu, X.-S. & Long, G. L. Quantum secure direct communication with high-dimension quantum superdense coding. Physical Review A 71, 044305 (2005). Huber, M. & Pawłowski, M. Weak randomness in device-independent quantum key distribution and the advantage of using high-dimensional entanglement. Physical Review A 88, 032309 (2013). Lloyd, S. Enhanced Sensitivity of Photodetection via Quantum Illumination. Science 321, 1463–1465 (2008). Mohammad, M. et al. High-dimensional quantum cryptography with twisted light. New Journal of Physics 17, 033033 (2015). Bechmann-Pasquinucci, H. & Tittel, W. Quantum cryptography using larger alphabets. Physical Review A 61, 062308 (2000). 
Howell, J. C., Bennink, R. S., Bentley, S. J. & Boyd, R. W. Realization of the Einstein-Podolsky-Rosen Paradox Using Momentum- and Position-Entangled Photons from Spontaneous Parametric Down Conversion. Physical Review Letters 92, 210403 (2004). Brida, G., Genovese, M. & Ruo Berchera, I. Experimental realization of sub-shot-noise quantum imaging. Nature Photon. 4, 227–230 (2010). Ono, T., Okamoto, R. & Takeuchi, S. An entanglement-enhanced microscope. Nat. Commun. 4, 2426 (2013). Walborn, S. P., Monken, C. H., Pádua, S. & Souto Ribeiro, P. H. Spatial correlations in parametric down-conversion. Physics Reports 495, 87–139 (2010). Erkmen, B. I. & Shapiro, J. H. Ghost imaging: from quantum to classical to computational. Advances in Optics and Photonics 2, 405–450 (2010). Boto, A. N. et al. Quantum Interferometric Optical Lithography: Exploiting Entanglement to Beat the Diffraction Limit. Physical Review Letters 85, 2733–2736 (2000). Krenn, M. et al. Generation and confirmation of a (100 × 100)-dimensional entangled quantum system. Proceedings of the National Academy of Sciences 111, 6243–6247 (2014). Martin, A. et al. Quantifying Photonic High-Dimensional Entanglement. Physical Review Letters 118, 110501 (2017). Reichert, M., Defienne, H., Sun, X. & Fleischer, J. W. Biphoton transmission through non-unitary objects. J. Opt. 19, 044004 (2017). Moreau, P.-A., Mougin-Sisini, J., Devaux, F. & Lantz, E. Realization of the purely spatial Einstein-Podolsky-Rosen paradox in full-field images of spontaneous parametric down-conversion. Physical Review A 86, 010101 (2012). Edgar, M. P. et al. Imaging high-dimensional spatial entanglement with a camera. Nat. Commun. 3, 984 (2012). Dąbrowski, M., Parniak, M. & Wasilewski, W. Einstein–Podolsky–Rosen paradox in a hybrid bipartite system. Optica 4, 272–275 (2017). Lantz, E., Denis, S., Moreau, P.-A. & Devaux, F. Einstein-Podolsky-Rosen paradox in single pairs of images. Optics Express 23, 26472–26478 (2015). Reichert, M., Sun, X. & Fleischer, J. W. Quality of Spatial Entanglement Propagation. Physical Review A 95, 063836 (2017). Waller, L., Situ, G. & Fleischer, J. W. Phase-space measurement and coherence synthesis of optical beams. Nature Photon. 6, 474–479 (2012). Devaux, F., Mougin-Sisini, J., Moreau, P. A. & Lantz, E. Towards the evidence of a purely spatial Einstein-Podolsky-Rosen paradox in images: measurement scheme and first experimental results. The European Physical Journal D 66, 192 (2012). Moreau, P.-A., Devaux, F. & Lantz, E. Einstein-Podolsky-Rosen Paradox in Twin Images. Physical Review Letters 113, 160401 (2014). Howland, G. A. & Howell, J. C. Efficient High-Dimensional Entanglement Imaging with a Compressive-Sensing Double-Pixel Camera. Physical Review X 3, 011013 (2013). Saleh, B. E. A., Teich, M. C. & Sergienko, A. V. Wolf Equations for Two-Photon Light. Physical Review Letters 94, 223601 (2005). Abouraddy, A. F., Saleh, B. E. A., Sergienko, A. V. & Teich, M. C. Entangled-photon Fourier optics. Journal of the Optical Society of America B 19, 1174–1184 (2002). Kurtsiefer, C., Oberparleiter, M. & Weinfurter, H. High-efficiency entangled photon pair collection in type-II parametric fluorescence. Physical Review A 64, 023802 (2001). Yanhua, S. Entangled biphoton source - property and preparation. Reports on Progress in Physics 66, 1009 (2003). Lantz, E., Blanchet, J.-L., Furfaro, L. & Devaux, F. Multi-imaging and Bayesian estimation for photon counting with EMCCDs. Monthly Notices of the Royal Astronomical Society 386, 2262–2270 (2008). 
Tasca, D. S., Edgar, M. P., Izdebski, F., Buller, G. S. & Padgett, M. J. Optimizing the use of detector arrays for measuring intensity correlations of photon pairs. Physical Review A 88, 013816 (2013). Andor iXon Ultra EMCCD Specifications, http://www.andor.com/pdfs/specifications/Andor_iXon_ULTRA_EMCCD_Specifications.pdf Santos, I. F., Aguirre-Gómez, J. G. & Pádua, S. Comparing quantum imaging with classical second-order incoherent imaging. Physical Review A 77, 043832 (2008). Saleh, B. E. A., Abouraddy, A. F., Sergienko, A. V. & Teich, M. C. Duality between partial coherence and partial entanglement. Physical Review A 62, 043816 (2000). The authors would like to thank Nova Photonics, Inc. for providing equipment used in the experiment. This work was supported by the Air Force Office of Scientific Research grant FA9550-12-1-0054. Department of Electrical Engineering, Princeton University, Princeton, NJ, 08544, USA: Matthew Reichert, Hugo Defienne & Jason W. Fleischer. Author contributions: M.R. and J.W.F. conceived of the experiment; M.R. and H.D. developed the theory and experimental design, and M.R. performed the experiment; all authors analyzed the data and co-wrote the paper. Correspondence to Matthew Reichert or Jason W. Fleischer. The authors declare no competing interests. This article is licensed under a Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/). https://doi.org/10.1038/s41598-018-26144-7
Total Absorption Spectroscopy Study of the Beta Decay of $^{86}$Br and $^{91}$Rb (1704.01915) S. Rice, A. Algora, J. L. Tain, E. Valencia, J. Agramunt, B. Rubio, W. Gelletly, P.H. Regan, A.-A. Zakari-Issoufou, M. Fallot, A. Porta, J. Rissanen, T. Eronen, J. Äystö, L. Batist, M. Bowry, V. M. Bui, R. Caballero-Folch, D. Cano-Ott, V.-V. Elomaa, E. Estevez, G. F. Farrelly, A. R. Garcia, B. Gomez-Hornillos, V. Gorlychev, J. Hakala, M. D. Jordan, A. Jokinen, F. G. Kondev, T. Martínez, P. Mason, E. Mendoza, I. Moore, H. Penttilä, Zs. Podolyák, M. Reponen, V. Sonnenschein, A. A. Sonzogni, P. Sarriguren April 6, 2017 nucl-ex The beta decays of $^{86}$Br and $^{91}$Rb have been studied using the total absorption spectroscopy technique. The radioactive nuclei were produced at the IGISOL facility in Jyväskylä and further purified using the JYFLTRAP. $^{86}$Br and $^{91}$Rb are considered to be major contributors to the decay heat in reactors. In addition $^{91}$Rb was used as a normalization point in direct measurements of mean gamma energies released in the beta decay of fission products by Rudstam et al. assuming that this decay was well known from high-resolution measurements. Our results show that both decays were suffering from the Pandemonium effect and that the results of Rudstam et al. should be renormalized. The relative impact of the studied decays in the prediction of the decay heat and antineutrino spectrum from reactors has been evaluated. Total Absorption Gamma-Ray Spectroscopy of 87Br, 88Br and 94Rb Beta-Delayed Neutron Emitters (1609.06128) E. Valencia, J. L. Tain, A. Algora, J. Agramunt, B. Rubio, S. Rice, W. Gelletly, P. Regan, A.-A. Zakari-Issoufou, M. Fallot, A. Porta, J. Rissanen, T. Eronen, J. Aysto, L. Batist, M. Bowry, V. M. Bui, R. Caballero-Folch, D. Cano-Ott, V.-V. Elomaa, E. Estevez, G. F. Farrelly, A. R. Garcia, B. Gomez-Hornillos, V. Gorlychev, J. Hakala, M.D. Jordan, A. Jokinen, V. S. Kolhinen, F. G. Kondev, T. Martinez, E. Mendoza, I. Moore, H. Penttila, Zs. Podolyak, M. Reponen, V. Sonnenschein, A. A. Sonzogni Sept. 20, 2016 nucl-ex We investigate the decay of 87Br, 88Br and 94Rb using total absorption gamma-ray spectroscopy. These important fission products are beta-delayed neutron emitters. Our data show considerable gamma-intensity, so far unobserved in high-resolution gamma-ray spectroscopy, from states at high excitation energy. We also find significant differences with the beta intensity that can be deduced from existing measurements of the beta spectrum. We evaluate the impact of the present data on reactor decay heat using summation calculations. Although the effect is relatively small it helps to reduce the discrepancy between calculations and integral measurements of the photon component for 235U fission at cooling times in the range 1 to 100 s. We also use summation calculations to evaluate the impact of present data on reactor antineutrino spectra. We find a significant effect at antineutrino energies in the range of 5 to 9 MeV. In addition, we observe an unexpected strong probability for gamma emission from neutron unbound states populated in the daughter nucleus. The gamma branching is compared to Hauser-Feshbach calculations which allow one to explain the large value for bromine isotopes as due to nuclear structure.
However the branching for 94Rb, although much smaller, hints of the need to increase the radiative width by one order-of-magnitude. This leads to a similar increase in the calculated (n,gamma) cross section for this very neutron-rich nucleus with a potential impact on r-process abundance calculations. Total Absorption Spectroscopy Study of $^{92}$Rb Decay: A Major Contributor to Reactor Antineutrino Spectrum Shape (1504.05812) A.-A. Zakari-Issoufou, M. Fallot, A. Porta, A. Algora, J.L. Tain, E. Valencia, S. Rice, V.M Bui, S. Cormon, M. Estienne, J. Agramunt, J. Äystö, M. Bowry, J.A. Briz, R. Caballero-Folch, D. Cano-Ott, A. Cucoanes, V.-V. Elomaa, T. Eronen, E. Estévez, G.F. Farrelly, A.R. Garcia, W. Gelletly, M.B Gomez-Hornillos, V. Gorlychev, J. Hakala, A. Jokinen, M.D. Jordan, A. Kankainen, P. Karvonen, V.S. Kolhinen, F.G Kondev, T. Martinez, E. Mendoza, F. Molina, I. Moore, A.B. Perez-Cerdán, Zs. Podolyák, H. Penttilä, P.H. Regan, M. Reponen, J. Rissanen, B. Rubio, T. Shiba, A.A. Sonzogni, C. Weber, IGISOL collaboration Sept. 24, 2015 hep-ex, nucl-ex The antineutrino spectra measured in recent experiments at reactors are inconsistent with calculations based on the conversion of integral beta spectra recorded at the ILL reactor. $^{92}$Rb makes the dominant contribution to the reactor spectrum in the 5-8 MeV range but its decay properties are in question. We have studied $^{92}$Rb decay with total absorption spectroscopy. Previously unobserved beta feeding was seen in the 4.5-5.5 region and the GS to GS feeding was found to be 87.5(25)%. The impact on the reactor antineutrino spectra calculated with the summation method is shown and discussed. Enhanced Gamma-Ray Emission from Neutron Unbound States Populated in Beta Decay (1505.05490) J. L. Tain, E. Valencia, A. Algora, J. Agramunt, B. Rubio, S. Rice, W. Gelletly, P. Regan, A.-A. Zakari-Issoufou, M. Fallot, A. Porta, J. Rissanen, T. Eronen, J. Aysto, L. Batist, M. Bowry, V. M. Bui, R. Caballero-Folch, D. Cano-Ott, V.-V. Elomaa, E. Estevez, G. F. Farrelly, A. R. Garcia, B. Gomez-Hornillos, V. Gorlychev, J. Hakala, M.D. Jordan, A. Jokinen, V. S. Kolhinen, F. G. Kondev, T. Martinez, E. Mendoza, I. Moore, H. Penttila, Zs. Podolyak, M. Reponen, V. Sonnenschein, A. A. Sonzogni May 20, 2015 nucl-ex Total absorption spectroscopy was used to investigate the beta-decay intensity to states above the neutron separation energy followed by gamma-ray emission in 87,88Br and 94Rb. Accurate results were obtained thanks to a careful control of systematic errors. An unexpectedly large gamma intensity was observed in all three cases extending well beyond the excitation energy region where neutron penetration is hindered by low neutron energy. The gamma branching as a function of excitation energy was compared to Hauser-Feshbach model calculations. For 87Br and 88Br the gamma branching reaches 57% and 20% respectively, and could be explained as a nuclear structure effect. Some of the states populated in the daughter can only decay through the emission of a large orbital angular momentum neutron with a strongly reduced barrier penetrability. In the case of neutron-rich 94Rb the observed 4.5% branching is much larger than the calculations performed with standard nuclear statistical model parameters, even after proper correction for fluctuation effects on individual transition widths. The difference can be reconciled introducing an enhancement of one order-of-magnitude in the photon strength to neutron strength ratio. 
An increase in the photon strength function of such magnitude for very neutron-rich nuclei, if it proved to be correct, leads to a similar increase in the (n,gamma) cross section that would have an impact on r-process abundance calculations. Mass measurements in the vicinity of the doubly-magic waiting point 56Ni (1007.0978) A. Kankainen, V.-V. Elomaa, T. Eronen, D. Gorelov, J. Hakala, A. Jokinen, T. Kessler, V.S. Kolhinen, I.D. Moore, S. Rahaman, M. Reponen, J. Rissanen, A. Saastamoinen, C. Weber, J. Äystö July 6, 2010 nucl-ex Masses of 56,57Fe, 53Co^m, 53,56Co, 55,56,57Ni, 57,58Cu, and 59,60Zn have been determined with the JYFLTRAP Penning trap mass spectrometer at IGISOL with a precision of dm/m \le 3 x 10^{-8}. The QEC values for 53Co, 55Ni, 56Ni, 57Cu, 58Cu, and 59Zn have been measured directly with a typical precision of better than 0.7 keV and Coulomb displacement energies have been determined. The Q values for proton captures on 55Co, 56Ni, 58Cu, and 59Cu have been measured directly. The precision of the proton-capture Q value for 56Ni(p,gamma)57Cu, Q(p,gamma) = 689.69(51) keV, crucial for astrophysical rp-process calculations, has been improved by a factor of 37. The excitation energy of the proton emitting spin-gap isomer 53Co^m has been measured precisely, Ex = 3174.3(10) keV, and a Coulomb energy difference of 133.9(10) keV for the 19/2- state has been obtained. Except for 53Co, the mass values have been adjusted within a network of 17 frequency ratio measurements between 13 nuclides which allowed also a determination of the reference masses 55Co, 58Ni, and 59Cu. Mass Measurements and Implications for the Energy of the High-Spin Isomer in 94Ag (0804.4743) A. Kankainen, V.-V. Elomaa, L. Batist, S. Eliseev, T. Eronen, U. Hager, J. Hakala, A. Jokinen, I.D. Moore, Yu.N. Novikov, H. Penttilä, A. Popov, S. Rahaman, S. Rinta-Antila, J. Rissanen, A. Saastamoinen, D.M. Seliverstov, T. Sonoda, G. Vorobjev, C. Weber, J. Äystö Oct. 9, 2008 nucl-ex Nuclides in the vicinity of 94Ag have been studied with the Penning trap mass spectrometer JYFLTRAP at the Ion-Guide Separator On-Line. The masses of the two-proton-decay daughter 92Rh and the beta-decay daughter 94Pd of the high-spin isomer in 94Ag have been measured, and the masses of 93Pd and 94Ag have been deduced. When combined with the data from the one-proton or two-proton-decay experiments, the results lead to contradictory mass excess values for the high-spin isomer in 94Ag, -46370(170) or -44970(100) keV, corresponding to excitation energies of 6960(400) or 8360(370) keV, respectively. Electron-capture branch of 100Tc and tests of nuclear wave functions for double-beta decays (0809.3757) S.K.L. Sjue, D. Melconian, A. Garcia, I. Ahmad, A. Algora, J. Aysto, V.-V. Elomaa, T. Eronen, J. Hakala, S. Hoedl, A. Kankainen, T. Kessler, I.D. Moore, F. Naab, H. Penttila, S. Rahaman, A. Saastamoinen, H.E. Swanson, C. Weber, S. Triambak, K. Deryckx We present a measurement of the electron-capture branch of $^{100}$Tc. Our value, $B(\text{EC}) = (2.6 \pm 0.4) \times 10^{-5}$, implies that the $^{100}$Mo neutrino absorption cross section to the ground state of $^{100}$Tc is roughly one third larger than previously thought. Compared to previous measurements, our value of $B(\text{EC})$ prevents a smaller disagreement with QRPA calculations relevant to double-$\beta$ decay matrix elements. Mass measurements in the vicinity of the rp-process and the nu p-process paths with JYFLTRAP and SHIPTRAP (0808.4065) C. Weber, V.-V. Elomaa, R. Ferrer, C. 
Fröhlich, D. Ackermann, J. Äystö, G. Audi, L. Batist, K. Blaum, M. Block, A. Chaudhuri, M. Dworschak, S. Eliseev, T. Eronen, U. Hager, J. Hakala, F. Herfurth, F.P. Heßberger, S. Hofmann, A. Jokinen, A. Kankainen, H.-J. Kluge, K. Langanke, A. Martín, G. Martínez-Pinedo, M. Mazzocco, I.D. Moore, J.B. Neumayr, Yu.N. Novikov, H. Penttilä, W.R. Plaß, A.V. Popov, S. Rahaman, T. Rauscher, C. Rauth, J. Rissanen, D. Rodríguez, A. Saastamoinen, C. Scheidenberger, L. Schweikhard, D.M. Seliverstov, T. Sonoda, F.-K. Thielemann, P.G. Thirolf, G.K. Vorobjev Aug. 29, 2008 nucl-ex The masses of very neutron-deficient nuclides close to the astrophysical rp- and nu p-process paths have been determined with the Penning trap facilities JYFLTRAP at JYFL/Jyv\"askyl\"a and SHIPTRAP at GSI/Darmstadt. Isotopes from yttrium (Z = 39) to palladium (Z = 46) have been produced in heavy-ion fusion-evaporation reactions. In total 21 nuclides were studied and almost half of the mass values were experimentally determined for the first time: 88Tc, 90-92Ru, 92-94Rh, and 94,95Pd. For the 95Pdm, (21/2^+) high-spin state, a first direct mass determination was performed. Relative mass uncertainties of typically $\delta m / m = 5 \times 10^{-8}$ were obtained. The impact of the new mass values has been studied in nu p-process nucleosynthesis calculations. The resulting reaction flow and the final abundances are compared to those obtained with the data of the Atomic Mass Evaluation 2003. Precise half-life measurement of the 26Si ground state (0801.4125) I. Matea, J. Souin, J. Aysto, B. Blank, P. Delahaye, V.-V. Elomaa, T. Eronen, J. Giovinazzo, U. Hager, J. Hakala, J. Huikari, A. Jokinen, A. Kankainen, I.D. Moore, J.-L. Pedroza, S. Rahaman, J. Rissanen, J. Ronkainen, A. Saastamoinen, T. Sonoda, C. Weber Aug. 7, 2008 nucl-ex The beta-decay half-life of 26Si was measured with a relative precision of 1.4*10e3. The measurement yields a value of 2.2283(27) s which is in good agreement with previous measurements but has a precision that is better by a factor of 4. In the same experiment, we have also measured the non-analogue branching ratios and could determine the super-allowed one with a precision similar to the previously reported measurements. The experiment was done at the Accelerator Laboratory of the University of Jyvaskyla where we used the IGISOL technique with the JYFLTRAP facility to separate pure samples of 26Si. Evolution of the N=50 shell gap energy towards $^{78}$Ni (0806.4489) J. Hakala, S. Rahaman, V.-V. Elomaa, T. Eronen, U. Hager, A. Jokinen, A. Kankainen, I. D. Moore, H. Penttilä, S. Rinta-Antila, J. Rissanen, A. Saastamoinen, T. Sonoda, C. Weber, J. Äystö June 27, 2008 nucl-ex Atomic masses of the neutron-rich isotopes $^{76-80}$Zn, $^{78-83}$Ga, $^{80-85}Ge, $^{81-87}$As and $^{84-89}$Se have been measured with high precision using the Penning trap mass spectrometer JYFLTRAP at the IGISOL facility. The masses of $^{82,83}$Ga, $^{83-85}$Ge, $^{84-87}$As and $^{89}$Se were measured for the first time. These new data represent a major improvement in the knowledge of the masses in this neutron-rich region. Two-neutron separation energies provide evidence for the reduction of the N=50 shell gap energy towards germanium Z=32 and a subsequent increase at gallium (Z=31). The data are compared with a number of theoretical models. An indication of the persistent rigidity of the shell gap towards nickel (Z=28) is obtained. Preparing isomerically pure beams of short-lived nuclei at JYFLTRAP (0801.2904) T. Eronen, V.-V. 
Elomaa, U. Hager, J. Hakala, A. Jokinen, A. Kankainen, S. Rahaman, J. Rissanen, C. Weber, J. Aysto Jan. 18, 2008 nucl-ex A new procedure to prepare isomerically clean samples of ions with a mass resolving power of more than 100,000 has been developed at the JYFLTRAP tandem Penning trap system. The method utilises a dipolar rf-excitation of the ion motion with separated oscillatory fields in the precision trap. During a subsequent retransfer to the purification trap, the contaminants are rejected and as a consequence, the remaining bunch is isomerically cleaned. This newly-developed method is suitable for very high-resolution cleaning and is at least a factor of five faster than the methods used so far in Penning trap mass spectrometry. Mass measurements of neutron-rich nuclei at JYFLTRAP (0801.0487) S. Rahaman, V.-V. Elomaa, T. Eronen, U. Hager, J. Hakala, A. Jokinen, A. Kankainen, J. Rissanen, C. Weber, J. Aysto, the IGISOL group Jan. 3, 2008 nucl-ex The JYFLTRAP mass spectrometer was used to measure the masses of neutron-rich nuclei in the region between N = 28 to N = 82 with uncertainties better than 10 keV. The impacts on nuclear structure and the r-process paths are reviewed. Q_EC values of the Superallowed beta Emitters 50Mn and 54Co (0712.3463) T. Eronen, V.-V. Elomaa, U. Hager, J. Hakala, J. C. Hardy, A. Jokinen, A. Kankainen, I. D. Moore, H. Penttila, S. Rahaman, S. Rinta-Antila, J. Rissanen, A. Saastamoinen, T. Sonoda, C. Weber, J. Aysto Dec. 20, 2007 nucl-ex Using a new fast cleaning procedure to prepare isomerically pure ion samples, we have measured the beta-decay Q_EC values of the superallowed beta-emitters 50Mn and 54Co to be 7634.48(7) keV and 8244.54(10) keV, respectively, results which differ significantly from the previously accepted values. The corrected Ft values derived from our results strongly support new isospin-symmetry-breaking corrections that lead to a higher value of the up-down quark mixing element, Vud, and improved confirmation of the unitarity of the Cabibbo-Kobayashi-Maskawa matrix. Q value of the 100Mo Double-Beta Decay (0712.3337) S. Rahaman, V.-V. Elomaa, T. Eronen, J. Hakala, A. Jokinen, J. Julin, A. Kankainen, A. Saastamoinen, J. Suhonen, C. Weber, J. Äystö Penning trap measurements using mixed beams of 100Mo - 100Ru and 76Ge - 76Se have been utilized to determine the double-beta decay Q-values of 100Mo and 76Ge with uncertainties less than 200 eV. The value for 76Ge, 2039.04(16) keV is in agreement with the published SMILETRAP value. The new value for 100Mo, 3034.40(17) keV is 30 times more precise than the previous literature value, sufficient for the ongoing neutrinoless double-beta decay searches in 100Mo. Moreover, the precise Q-value is used to calculate the phase-space integrals and the experimental nuclear matrix element of double-beta decay. Precise atomic masses of neutron-rich Br and Rb nuclei close to the r-process path (nucl-ex/0703017) S. Rahaman, U. Hager, V.-V. Elomaa, T. Eronen, J. Hakala, A. Jokinen, A. Kankainen, P. Karvonen, I.D. Moore, H. Penttila, S. Rinta-Antila, J. Rissanen, A. Saastamoinen, T. Sonoda, J. Aysto March 12, 2007 nucl-ex The Penning trap mass spectrometer JYFLTRAP, coupled to the Ion-Guide Isotope Separator On-Line (IGISOL) facility at Jyvaskyla, was employed to measure the atomic masses of neutron rich 85 to 92Br and 94 to 97Rb isotopes with a typical accuracy less than 10 keV. Discrepancies with the older data are discussed. Comparison to different mass models is presented. 
Details of nuclear structure, shell and subshell closures are investigated by studying the two-neutron separation energy and the shell gap energy. Precision mass measurements of radioactive nuclei at JYFLTRAP (nucl-ex/0703018) S. Rahaman, V.-V. Elomaa, T. Eronen, U. Hager, J. Hakala, A. Jokinen, A. Kankainen, I.D. Moore, H. Penttila, S.Rinta-Antila, J. Rissanen, A. Saastamoinen, T. Sonoda, C. Weber, J. Aysto The Penning trap mass spectrometer JYFLTRAP was used to measure the atomic masses of radioactive nuclei with an uncertainty better than 10 keV. The atomic masses of the neutron-deficient nuclei around the N = Z line were measured to improve the understanding of the rp-process path and the SbSnTe cycle. Furthermore, the masses of the neutron-rich gallium (Z = 31) to palladium (Z = 46) nuclei have been measured. The physics impacts on the nuclear structure and the r-process paths are reviewed. A better understanding of the nuclear deformation is presented by studying the pairing energy around A = 100.
Zero-knowledge transfer of value protocol II [closed] This is an improvement of the protocol described here. The protocol does not require trusted setup and is very efficient (much more efficient than anything else I could find). The protocol allows the following: Alice and Bob hold secret values $a$ and $b$ respectively. Alice can "transfer" a part of her value to Bob such that whatever she transfers must be subtracted from her value (the sum of their value must remain the same). For example, if she has $10$ and Bob has $5$, after transferring $2$, she should have $8$ and Bob should have $7$. Victor is an independent observer and must be able to verify that the sum of the values does not change as a result of the transfer. But he should do so without learning any of the amounts involved. If anyone sees holes in the protocol described below, or, if there is a better (more efficient) way to do it, I would greatly appreciate feedback. To facilitate the transfer, two elliptic curves of different orders are used. These curves are $g_1$ and $g_2$ of orders $q_1$ and $q_2$ and generators $G_1$ and $G_2$. The orders $q_1$ and $q_2$ should be distinct but very close to each other, with $q_1 < q_2$. Using these curves, Alice and Bob can create public commitments to their values. For example, committing to value $a$ would work as follows: Map value $a$ to points on both curves as $A_1 = a ⋅ G_1$ and $A_2 = a ⋅ G_2$ Choose a random value $r$ and calculate $T=r⋅G_1$, $S=r⋅G_2$ Calculate $u=H(G_1,G_2,A_1,A_2,T,S)$, where $H$ is a hash function Calculate $v=r+a⋅u$ such that $\frac{v}{u} < q_1$. This may require iterating through several values of $r$ The commitment to $a$ is then defined as $A = (A_1,A_2, T, S, v)$ An observer can verify that $A_1$ and $A_2$ are derived from the same $a$ and that $a < q_1$ by doing the following: Compute $u$ Check that $\frac{v}{u} < q_1$ Check that $v⋅G_1=T+u⋅A_1$ and $v⋅G_2=S+u⋅A_2$ Alice has publicly committed to $a$ using the methodology described above and publishing $A = (A_1,A_2, T_a, S_a, v_a)$. We assume that is known for a fact that $A_1$ and $A_2$ refer to the same value $a$. Bob has publicly committed to $b$ using the methodology described above and publishing $B = (B_1,B_2, T_b, S_b, v_b)$. We assume that it is known for a fact that $B_1$ and $B_2$ refer to the same value $b$. Alice wants to transfer value $k$ to Bob such that $a' = a - k$ and $b' = b + k$. To do this, she does the following: Calculates commitment for the new value $a'$ as $A' = (A_1', A_2', T_{a'}, S_{a'}, v_{a'})$ Calculates commitment for value $k$ as $K = (K_1,K_2,T_k,S_k,v_k)$ Makes both commitments public by publishing $A'$ and $K$ Bob receives value $k$ from Alice via a secure channel and does the following: Verifies that $K_1 = k ⋅ G_1$ and $K_2 = k ⋅ G_2$ Calculates his new value $b'$ as $b' = b + k$ Calculates commitment to $b'$ as $B' = (B_1', B_2', T_{b'}, S_{b'}, v_{b'})$ Makes the new commitment to $b$ public by sharing $B'$ An independent observer (Victor) can verify that the total value in the system didn't change by doing the following: Verify that each pair of points $(A_1', A_2'), (B_1', B_2'), (K_1, K_2)$ was derived from the same values and that each of the underlying values is less than $q_1$ using the methodology described previously Verify that $A_1 = A_1' + K_1$ and $A_2 = A_2' + K_2$ Verify that $B_1' = B_1 + K_1$ and $B_2' = B_2 + K_2$ The scheme above should be secure because all values are effectively padded. 
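For readers who want to see the moving parts, here is a toy sketch of the commit/verify steps just described (the padding of the committed values is discussed in the next paragraph). It is not a secure or faithful implementation: small multiplicative subgroups of $\mathbb{Z}_p^*$ stand in for the two elliptic curves, the group parameters are made up, and SHA-256 plays the role of $H$; with $r$ far smaller than $u$, the $v/u < q_1$ condition holds without iterating over $r$.

```python
import hashlib

# Toy stand-ins for the two curves: subgroups of Z_p^* with distinct prime
# orders q1 < q2 (made-up parameters, far too small to be secure).
p1, q1, g1 = 23, 11, 2     # g1 = 2 generates the order-11 subgroup of Z_23^*
p2, q2, g2 = 53, 13, 16    # g2 = 16 generates the order-13 subgroup of Z_53^*

def H(*vals):
    data = "|".join(str(v) for v in vals).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def commit(a, r):
    """Commitment (A1, A2, T, S, v) to a value a < q1 with randomness r."""
    A1, A2 = pow(g1, a, p1), pow(g2, a, p2)
    T, S = pow(g1, r, p1), pow(g2, r, p2)
    u = H(g1, g2, A1, A2, T, S)
    v = r + a * u              # computed over the integers, not modulo q1 or q2
    assert v // u < q1         # the "v/u < q1" check; holds here because r << u
    return A1, A2, T, S, v

def verify(A1, A2, T, S, v):
    u = H(g1, g2, A1, A2, T, S)
    return (v // u < q1
            and pow(g1, v, p1) == (T * pow(A1, u, p1)) % p1
            and pow(g2, v, p2) == (S * pow(A2, u, p2)) % p2)

assert verify(*commit(a=7, r=5))   # e.g. commit to the value 7
```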
The numbers are 256-bit integers that have the following properties: A single "coin" consists $10^{69}$ indivisible "units" This implies that a single coin requires 230 bits to express When transferring values, users randomize the lower bits (e.g. lower 220 bits) such that the same value is never sent twice So, effectively, instead of transferring something like $2$, Alice would be transferring something like $2.0000003487094035343043$ Note: to prevent forgery, Alice and Bob could also sign their commitments - but this is not covered by the description above. elliptic-curves protocol-design zero-knowledge-proofs irakliy irakliyirakliy closed as off-topic by fkraiem, Ella Rose♦, e-sushi Sep 9 '18 at 14:55 "Requests for analyzing ciphertext or reviewing full cryptographic designs are off-topic, as the results are rarely useful to anyone else and/or would be too long for this site." – fkraiem, Ella Rose, e-sushi A few remarks: You should try to abstract out the primitives you are using, instead of describing everything "from scratch". Typically, you describe a non-interactive zero-knowledge proof for the relation $\exists a, A_1 = a\cdot G_1 \wedge A_2 = a\cdot G_2$. Rather than giving the exact description of this proof from the Fiat-Shamir transform applied to a $\Sigma$-protocol (which is what you're doing), it would be better to just say "Alice publicly proves that this relation is satisfied using a non-interactive zero-knowledge proof". This abstracts the actual properties you want from the concrete implementation you consider, which should make it considerably easier to analyze security. Similarly, you should try to simplify your problem as much as you can. For example here you're adding signatures to authenticate Alice and Bob; but this is not the core of your problem. You could simply assume that Alice and Bob are interacting over authenticated channel, separating your specific problem from the unrelated problem of actually authenticated players with signatures. To get a security result, you need to first formalize the exact security guarantee you want from the protocol. The current description that you give is not formal enough to exactly state the precise security property that your protocol should satisfy. I see a few issues with the current protocol. First, your protocol would not work if $a,b,k$ do not have a lot of entropy (e.g. if they are small integers, as in your example). For example, given $G_1$ and $K_1 = k\cdot G_1$, Victor could easily try to compute $i\cdot G_1$ for many values of $i$, and will find $k$ this way unless $k$ comes from a high-entropy distribution. Usually, one relies on Pedersen commitments of the form $m\cdot G_1 + r\cdot H_1$ for some random $r$ instead of just $k\cdot G_1$ to fix this kind of issues, so as to perfectly hide $k$ (and not simply making it hard to find it when it has a lot of entropy). Second, there is a trivial way to break your proof that the same $a$ is used in $A_1$ and $A_2$: assuming that $\gcd(q_1,q_2) = 1$ (which will be the case if they are different primes), one can easily find for any pair $(a_1, a_2)$ over $\mathbb{Z}_{q_1}\times \mathbb{Z}_{q_2}$ a value $a$ such that $a = a_1 \bmod q_1$ and $a = a_2 \bmod q_2$. Then one can use this $a$ to prove that $A_1 = a\cdot G_1$ and $A_2 = a\cdot G_2$ (as it's indeed the case). That means that the proof can always be constructed for any $A_1 = a_1\cdot G_1, A_2 = a_2\cdot G_2$, for arbitrary values $a_1, a_2$. Hence, this proof does in fact not give any useful information. 
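The Chinese-Remainder observation in the preceding paragraph is easy to check numerically. The snippet below (my own illustration, with toy moduli) only demonstrates the CRT step described above: a single integer exponent that reduces to two different values modulo $q_1$ and $q_2$.

```python
# Toy moduli standing in for the two group orders (illustration only).
q1, q2 = 11, 13

def crt(a1, a2):
    """Return a with a % q1 == a1 and a % q2 == a2 (Chinese Remainder Theorem)."""
    m = pow(q1, -1, q2)            # q1^{-1} mod q2 (Python 3.8+)
    return (a1 + q1 * ((a2 - a1) * m % q2)) % (q1 * q2)

a1, a2 = 3, 9                      # pretend the exponent is 3 in group 1 but 9 in group 2
a = crt(a1, a2)
assert a % q1 == a1 and a % q2 == a2
print(a)                           # 113: one integer consistent with both "commitments"
```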
What happens if $K_e$ is not really an encryption of the value $k$ such that $K_1 = k\cdot G_1$? What proves to Victor that Alice and Bob are actually using this value $k$? Nothing seems to bind the value $k$ to the value encrypted in $K_e$. Unless I have misunderstood, what you want can be realized as follows: (I abstract out the details) there are two public homomorphic commitments, to $a$ and $b$. Alice sends a commitment to $k$, allowing Victor to publicly compute commitments to $a-k$ and $b+k$ using the homomorphic properties, and Alice securely sends $k$ to Bob (using e.g. a secure key exchange followed by some encryption of $k$), while proving publicly that what she securely sent to Bob is indeed the committed value $k$. You should try to build what you want with this kind of reasoning, using the abstract primitives you need and formalizing the exact security properties you want to guarantee. Currently, I really don't get what you're trying to achieve by using two different curves with two different moduli (could you explain why you want those different moduli?). – Geoffroy Couteau Comment: Thank you very much for such a thorough answer. I agree that my question could be much improved in terms of clarity. To answer some of the points you brought up: (1) as I mentioned toward the end of the question, all values are high-entropy values; (2) $K_e$ is only needed for communicating the value of $k$ to Bob. In theory, this could be done over a separate encrypted channel - and it wouldn't affect the rest of the protocol; (3) the reasons for using 2 elliptic curves are described here – irakliy Jul 5 '18 at 21:06 Comment: Basically, assuming you know that both $A_1$ and $A_2$ are derived from the same number $a$, I don't think it's possible to come up with $(A_1', A_2')$ and $(K_1, K_2)$ such that $A_1 = A_1' + K_1$ AND $A_2 = A_2' + K_2$ if $(A_1', A_2')$ and $(K_1, K_2)$ are not backed by the same numbers $a'$ and $k$. But do let me know if I'm wrong on this. – irakliy Jul 5 '18 at 21:23 Comment: I updated my question to remove some of the things you suggested (e.g. signatures). – irakliy Jul 5 '18 at 22:09
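To make the suggested alternative concrete, here is a toy sketch (my own illustration, not the answerer's code) of the homomorphic-commitment idea: Pedersen commitments $C(m, r) = m\cdot G + r\cdot H$, written multiplicatively in a small prime-order subgroup instead of an elliptic curve. All parameters are made-up toy values and nothing here is secure; the point is only that Victor can combine public commitments to check the balance update without learning $a$, $b$, or $k$.

```python
import secrets

p, q = 23, 11          # subgroup of order q = 11 inside Z_23^*
g, h = 2, 3            # two subgroup generators; their mutual discrete log is of course known here (toy choice)

def commit(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

# Balances and transfer amount, mirroring the example in the question.
a, r_a = 10, secrets.randbelow(q)
b, r_b = 5,  secrets.randbelow(q)
k, r_k = 2,  secrets.randbelow(q)

C_a, C_b, C_k = commit(a, r_a), commit(b, r_b), commit(k, r_k)

# Victor combines commitments homomorphically: C_a / C_k commits to a - k,
# and C_b * C_k commits to b + k, without Victor learning a, b, or k.
inv_C_k = pow(C_k, -1, p)
assert (C_a * inv_C_k) % p == commit(a - k, r_a - r_k)
assert (C_b * C_k) % p == commit(b + k, r_b + r_k)
```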
Project acronym BeyondA1 Project Set theory beyond the first uncountable cardinal Researcher (PI) Assaf Shmuel Rinot Summary We propose to establish a research group that will unveil the combinatorial nature of the second uncountable cardinal. This includes its Ramsey-theoretic, order-theoretic, graph-theoretic and topological features. Among others, we will be directly addressing fundamental problems due to Erdos, Rado, Galvin, and Shelah. While some of these problems are old and well-known, an unexpected series of breakthroughs from the last three years suggest that now is a promising point in time to carry out such a project. Indeed, through a short period, four previously unattainable problems concerning the second uncountable cardinal were successfully tackled: Aspero on a club-guessing problem of Shelah, Krueger on the club-isomorphism problem for Aronszajn trees, Neeman on the isomorphism problem for dense sets of reals, and the PI on the Souslin problem. Each of these results was obtained through the development of a completely new technical framework, and these frameworks could now pave the way for the solution of some major open questions. A goal of the highest risk in this project is the discovery of a consistent (possibly, parameterized) forcing axiom that will (preferably, simultaneously) provide structure theorems for stationary sets, linearly ordered sets, trees, graphs, and partition relations, as well as the refutation of various forms of club-guessing principles, all at the level of the second uncountable cardinal. In comparison, at the level of the first uncountable cardinal, a forcing axiom due to Foreman, Magidor and Shelah achieves exactly that. To approach our goals, the proposed project is divided into four core areas: Uncountable trees, Ramsey theory on ordinals, Club-guessing principles, and Forcing Axioms. There is a rich bilateral interaction between any pair of the four different cores, but the proposed division will allow an efficient allocation of manpower, and will increase the chances of parallel success.
Project acronym EffectiveTG Project Effective Methods in Tame Geometry and Applications in Arithmetic and Dynamics Researcher (PI) Gal BINYAMINI Summary Tame geometry studies structures in which every definable set has a finite geometric complexity. The study of tame geometry spans several interrelated mathematical fields, including semialgebraic, subanalytic, and o-minimal geometry. The past decade has seen the emergence of a spectacular link between tame geometry and arithmetic following the discovery of the fundamental Pila-Wilkie counting theorem and its applications in unlikely diophantine intersections. The P-W theorem itself relies crucially on the Yomdin-Gromov theorem, a classical result of tame geometry with fundamental applications in smooth dynamics. It is natural to ask whether the complexity of a tame set can be estimated effectively in terms of the defining formulas. While a large body of work is devoted to answering such questions in the semialgebraic case, surprisingly little is known concerning more general tame structures - specifically those needed in recent applications to arithmetic. The nature of the link between tame geometry and arithmetic is such that any progress toward effectivizing the theory of tame structures will likely lead to effective results in the domain of unlikely intersections. Similarly, a more effective version of the Yomdin-Gromov theorem is known to imply important consequences in smooth dynamics. The proposed research will approach effectivity in tame geometry from a fundamentally new direction, bringing to bear methods from the theory of differential equations which have until recently never been used in this context. Toward this end, our key goals will be to gain insight into the differential algebraic and complex analytic structure of tame sets; and to apply this insight in combination with results from the theory of differential equations to effectivize key results in tame geometry and its applications to arithmetic and dynamics. I believe that my preliminary work in this direction amply demonstrates the feasibility and potential of this approach.
Project acronym HomDyn Project Homogenous dynamics, arithmetic and equidistribution Researcher (PI) Elon Lindenstrauss Host Institution (HI) THE HEBREW UNIVERSITY OF JERUSALEM Summary We consider the dynamics of actions on homogeneous spaces of algebraic groups, and propose to tackle a wide range of problems in the area, including the central open problems. One main focus in our proposal is the study of the intriguing and somewhat subtle rigidity properties of higher rank diagonal actions. We plan to develop new tools to study invariant measures for such actions, including the zero entropy case, and in particular Furstenberg's Conjecture about $\times 2,\times 3$-invariant measures on $\R / \Z$. A second main focus is on obtaining quantitative and effective equidistribution and density results for unipotent flows, with emphasis on obtaining results with a polynomial error term. One important ingredient in our study of both diagonalizable and unipotent actions is arithmetic combinatorics. Interconnections between these subjects and arithmetic equidistribution properties, Diophantine approximations and automorphic forms will be pursued.
Project acronym NBEB-SSP Project Nonparametric Bayes and empirical Bayes for species sampling problems: classical questions, new directions and related issues Researcher (PI) Stefano FAVARO Summary Consider a population of individuals belonging to different species with unknown proportions. Given an initial (observable) random sample from the population, how do we estimate the number of species in the population, or the probability of discovering a new species in one additional sample, or the number of hitherto unseen species that would be observed in additional unobservable samples? These are archetypal examples of a broad class of statistical problems referred to as species sampling problems (SSP), namely: statistical problems in which the objects of inference are functionals involving the unknown species proportions and/or the species frequency counts induced by observable and unobservable samples from the population. SSPs first appeared in ecology, and their importance has grown considerably in the recent years driven by challenging applications in a wide range of leading scientific disciplines, e.g., biosciences and physical sciences, engineering sciences, machine learning, theoretical computer science and information theory, etc. The objective of this project is the introduction and a thorough investigation of new nonparametric Bayes and empirical Bayes methods for SSPs. The proposed advances will include: i) addressing challenging methodological open problems in classical SSPs under the nonparametric empirical Bayes framework, which is arguably the most developed (currently most implemented by practitioners) framework to deal with classical SSPs; fully exploiting and developing the potential of tools from mathematical analysis, combinatorial probability and Bayesian nonparametric statistics to set forth a coherent modern approach to classical SSPs, and then investigating the interplay between this approach and its empirical counterpart; extending the scope of the above studies to more challenging SSPs, and classes of generalized SSPs, that have emerged recently in the fields of biosciences and physical sciences, machine learning and information theory.
The proposed advances will include: i) addressing challenging methodological open problems in classical SSPs under the nonparametric empirical Bayes framework, which is arguably the most developed (currently most implemented by practitioners) framework do deal with classical SSPs; fully exploiting and developing the potential of tools from mathematical analysis, combinatorial probability and Bayesian nonparametric statistics to set forth a coherent modern approach to classical SSPs, and then investigating the interplay between this approach and its empirical counterpart; extending the scope of the above studies to more challenging SSPs, and classes of generalized SSPs, that have emerged recently in the fields of biosciences and physical sciences, machine learning and information theory. Project acronym PATHWISE Project Pathwise methods and stochastic calculus in the path towards understanding high-dimensional phenomena Researcher (PI) Ronen ELDAN Summary Concepts from the theory of high-dimensional phenomena play a role in several areas of mathematics, statistics and computer science. Many results in this theory rely on tools and ideas originating in adjacent fields, such as transportation of measure, semigroup theory and potential theory. In recent years, a new symbiosis with the theory of stochastic calculus is emerging. In a few recent works, by developing a novel approach of pathwise analysis, my coauthors and I managed to make progress in several central high-dimensional problems. This emerging method relies on the introduction of a stochastic process which allows one to associate quantities and properties related to the high-dimensional object of interest to corresponding notions in stochastic calculus, thus making the former tractable through the analysis of the latter. We propose to extend this approach towards several long-standing open problems in high dimensional probability and geometry. First, we aim to explore the role of convexity in concentration inequalities, focusing on three central conjectures regarding the distribution of mass on high dimensional convex bodies: the Kannan-Lov'asz-Simonovits (KLS) conjecture, the variance conjecture and the hyperplane conjecture as well as emerging connections with quantitative central limit theorems, entropic jumps and stability bounds for the Brunn-Minkowski inequality. Second, we are interested in dimension-free inequalities in Gaussian space and on the Boolean hypercube: isoperimetric and noise-stability inequalities and robustness thereof, transportation-entropy and concentration inequalities, regularization properties of the heat-kernel and L_1 versions of hypercontractivity. Finally, we are interested in developing new methods for the analysis of Gibbs distributions with a mean-field behavior, related to the new theory of nonlinear large deviations, and towards questions regarding interacting particle systems and the analysis of large networks. Concepts from the theory of high-dimensional phenomena play a role in several areas of mathematics, statistics and computer science. Many results in this theory rely on tools and ideas originating in adjacent fields, such as transportation of measure, semigroup theory and potential theory. In recent years, a new symbiosis with the theory of stochastic calculus is emerging. In a few recent works, by developing a novel approach of pathwise analysis, my coauthors and I managed to make progress in several central high-dimensional problems. 
This emerging method relies on the introduction of a stochastic process which allows one to associate quantities and properties related to the high-dimensional object of interest to corresponding notions in stochastic calculus, thus making the former tractable through the analysis of the latter. We propose to extend this approach towards several long-standing open problems in high dimensional probability and geometry. First, we aim to explore the role of convexity in concentration inequalities, focusing on three central conjectures regarding the distribution of mass on high dimensional convex bodies: the Kannan-Lov'asz-Simonovits (KLS) conjecture, the variance conjecture and the hyperplane conjecture as well as emerging connections with quantitative central limit theorems, entropic jumps and stability bounds for the Brunn-Minkowski inequality. Second, we are interested in dimension-free inequalities in Gaussian space and on the Boolean hypercube: isoperimetric and noise-stability inequalities and robustness thereof, transportation-entropy and concentration inequalities, regularization properties of the heat-kernel and L_1 versions of hypercontractivity. Finally, we are interested in developing new methods for the analysis of Gibbs distributions with a mean-field behavior, related to the new theory of nonlinear large deviations, and towards questions regarding interacting particle systems and the analysis of large networks. Project acronym SensStabComp Project Sensitivity, Stability, and Computation Researcher (PI) Gil KALAI Host Institution (HI) INTERDISCIPLINARY CENTER (IDC) HERZLIYA Summary Noise sensitivity and noise stability of Boolean functions, percolation, and other models were introduced in a paper by Benjamini, Kalai, and Schramm (1999) and were extensively studied in the last two decades. We propose to extend this study to various stochastic and combinatorial models, and to explore connections with computer science, quantum information, voting methods and other areas. The first goal of our proposed project is to push the mathematical theory of noise stability and noise sensitivity forward for various models in probabilistic combinatorics and statistical physics. A main mathematical tool, going back to Kahn, Kalai, and Linial (1988), is applications of (high-dimensional) Fourier methods, and our second goal is to extend and develop these discrete Fourier methods. Our third goal is to find applications toward central old-standing problems in combinatorics, probability and the theory of computing. The fourth goal of our project is to further develop the ``argument against quantum computers'' which is based on the insight that noisy intermediate scale quantum computing is noise stable. This follows the work of Kalai and Kindler (2014) for the case of noisy non-interacting bosons. The fifth goal of our proposal is to enrich our mathematical understanding and to apply it, by studying connections of the theory with various areas of theoretical computer science, and with the theory of social choice. Noise sensitivity and noise stability of Boolean functions, percolation, and other models were introduced in a paper by Benjamini, Kalai, and Schramm (1999) and were extensively studied in the last two decades. We propose to extend this study to various stochastic and combinatorial models, and to explore connections with computer science, quantum information, voting methods and other areas. 
The first goal of our proposed project is to push the mathematical theory of noise stability and noise sensitivity forward for various models in probabilistic combinatorics and statistical physics. A main mathematical tool, going back to Kahn, Kalai, and Linial (1988), is applications of (high-dimensional) Fourier methods, and our second goal is to extend and develop these discrete Fourier methods. Our third goal is to find applications toward central old-standing problems in combinatorics, probability and the theory of computing. The fourth goal of our project is to further develop the ``argument against quantum computers'' which is based on the insight that noisy intermediate scale quantum computing is noise stable. This follows the work of Kalai and Kindler (2014) for the case of noisy non-interacting bosons. The fifth goal of our proposal is to enrich our mathematical understanding and to apply it, by studying connections of the theory with various areas of theoretical computer science, and with the theory of social choice. Project acronym TechChange Project Technological Change: New Sources, Consequences, and Impact Mitigation Researcher (PI) Philipp Albert Theodor Kircher Summary Technological change in information technology has the potential of transforming the production process of firms. Some processes such as automation affect workers directly, while others such as the introduction of automated workflow and control tools simply allow firms to grow to bigger sizes. We call the latter Quantity-Biased Technological Change (QBTC). The existing literature has done relatively little to understand the effects of such technological progress that affects the size and structure of firms and thereby indirectly the workforce. We aim to study this type of technological change, and especially how it affects the workforce in terms of employment and wage inequality. We aim to explore this intuitive idea in a model with highly heterogeneous firms and workers and decreasing returns to firm size. When technology enables firms to manage more workers, productive firms are now less limited by size considerations and tend to expand, affecting the marginal product of the workers. A preliminary calibration that aims to explains changes in the firm size, wage, and profit distribution in Germany over the last 15 years shows evidence of quantitative importance of the channel and its interaction with other effects, such as Skill-Biased Technological Change (SBTC). Yet there remain many obstacles: accounting for worker heterogeneity within the firm, exploring issues of market power, micro-founding the source of QBTC, and possibly linking it to new innovations such as the rise of artificial intelligence in firm management. In addition, we propose to revisit past periods of technological change and related policy interventions to gain insights for the future. To achieve this, we discuss how to analyze the past impact on regions with different industrial and occupational compositions. On top, we aim to explore a novel methodology to identify which and how many workers will be affected. Technological change in information technology has the potential of transforming the production process of firms. Some processes such as automation affect workers directly, while others such as the introduction of automated workflow and control tools simply allow firms to grow to bigger sizes. We call the latter Quantity-Biased Technological Change (QBTC). 
The existing literature has done relatively little to understand the effects of such technological progress that affects the size and structure of firms and thereby indirectly the workforce. We aim to study this type of technological change, and especially how it affects the workforce in terms of employment and wage inequality. We aim to explore this intuitive idea in a model with highly heterogeneous firms and workers and decreasing returns to firm size. When technology enables firms to manage more workers, productive firms are now less limited by size considerations and tend to expand, affecting the marginal product of the workers. A preliminary calibration that aims to explains changes in the firm size, wage, and profit distribution in Germany over the last 15 years shows evidence of quantitative importance of the channel and its interaction with other effects, such as Skill-Biased Technological Change (SBTC). Yet there remain many obstacles: accounting for worker heterogeneity within the firm, exploring issues of market power, micro-founding the source of QBTC, and possibly linking it to new innovations such as the rise of artificial intelligence in firm management. In addition, we propose to revisit past periods of technological change and related policy interventions to gain insights for the future. To achieve this, we discuss how to analyze the past impact on regions with different industrial and occupational compositions. On top, we aim to explore a novel methodology to identify which and how many workers will be affected. Project acronym VALURED Project Value Judgments and Redistribution Policies Researcher (PI) Paolo Giovanni PIACQUADIO Summary Heterogeneity and diversity are a pervasive aspect of modern societies. Differences in individuals' preferences, needs, skills, and information are key to explain variation in individuals' behavior and to anticipate individuals' responses to policy changes. There is no consensus, however, about how to take these differences into account when evaluating policies. Project VALURED will reexamine this ethical challenge by characterizing the mapping between value judgments—i.e. principles of distributive justice—and redistribution policies. This mapping is tremendously important for welfare analysis and policy design. First, it associates the most desirable policy to each set of value judgments, providing an "ethical menu" to policy design. Second, it gives an ethical identity of each policy proposal, that is, it identifies the value judgments a policymaker endorses when proposing a specific policy. The main objectives of VALURED are to: 1) identify transparent and compelling value judgments that accommodate heterogeneity and diversity; 2) show the implications of these value judgments for the evaluation and design of redistribution policies; 3) characterize welfare criteria that respect individuals' preferences and account for individuals' differences in needs, skills, and information; 4) provide new insights for the design of income, capital, and inheritance taxation; 5) develop simple formulas that express optimal policies as a function of observable heterogeneity and ethical parameters. Project VALURED combines welfare economics with public economics. The first part deals with income taxation and addresses the ethical challenges related to individuals' heterogeneity in preferences, needs, and skills. The second part focuses on capital taxation and addresses individuals' differences in risk preferences and information. 
The third part analyses the design of inheritance taxation and addresses the social concerns for intergenerational and intragenerational equity. Heterogeneity and diversity are a pervasive aspect of modern societies. Differences in individuals' preferences, needs, skills, and information are key to explain variation in individuals' behavior and to anticipate individuals' responses to policy changes. There is no consensus, however, about how to take these differences into account when evaluating policies. Project VALURED will reexamine this ethical challenge by characterizing the mapping between value judgments—i.e. principles of distributive justice—and redistribution policies. This mapping is tremendously important for welfare analysis and policy design. First, it associates the most desirable policy to each set of value judgments, providing an "ethical menu" to policy design. Second, it gives an ethical identity of each policy proposal, that is, it identifies the value judgments a policymaker endorses when proposing a specific policy. The main objectives of VALURED are to: 1) identify transparent and compelling value judgments that accommodate heterogeneity and diversity; 2) show the implications of these value judgments for the evaluation and design of redistribution policies; 3) characterize welfare criteria that respect individuals' preferences and account for individuals' differences in needs, skills, and information; 4) provide new insights for the design of income, capital, and inheritance taxation; 5) develop simple formulas that express optimal policies as a function of observable heterogeneity and ethical parameters. Project VALURED combines welfare economics with public economics. The first part deals with income taxation and addresses the ethical challenges related to individuals' heterogeneity in preferences, needs, and skills. The second part focuses on capital taxation and addresses individuals' differences in risk preferences and information. The third part analyses the design of inheritance taxation and addresses the social concerns for intergenerational and intragenerational equity.
Let $f(x)$ be a third-degree polynomial with real coefficients satisfying $$|f(1)|=|f(2)|=|f(3)|=|f(5)|=|f(6)|=|f(7)|=12.$$ Find $|f(0)|$.
Let $A = \{1, 2, 3, 4, 5, 6, 7\}$, and let $N$ be the number of functions $f$ from set $A$ to set $A$ such that $f(f(x))$ is a constant function. Find the remainder when $N$ is divided by $1000$.
Let $f_1(x) = \frac23 - \frac3{3x+1}$, and for $n \ge 2$, define $f_n(x) = f_1(f_{n-1}(x))$. The value of $x$ that satisfies $f_{1001}(x) = x-3$ can be expressed in the form $\frac mn$, where $m$ and $n$ are relatively prime positive integers. Find $m+n$.
Given the function $f(x) = 2x^2 - 3x + 7$ with domain $\{-2, -1, 3, 4\}$, what is the largest integer in the range of $f$?
For every positive integer $n$, let $\text{mod}_5 (n)$ be the remainder obtained when $n$ is divided by 5. Define a function $f: \{0,1,2,3,\dots\} \times \{0,1,2,3,4\} \to \{0,1,2,3,4\}$ recursively as follows: \[f(i,j) = \begin{cases}\text{mod}_5 (j+1) & \text{ if } i = 0 \text{ and } 0 \le j \le 4 \text{,}\\ f(i-1,1) & \text{ if } i \ge 1 \text{ and } j = 0 \text{, and} \\ f(i-1, f(i,j-1)) & \text{ if } i \ge 1 \text{ and } 1 \le j \le 4. \end{cases}\] What is $f(2015,2)$?
The domain of the function $f(x)=\log_{\frac12}(\log_4(\log_{\frac14}(\log_{16}(\log_{\frac1{16}}x))))$ is an interval of length $\tfrac mn$, where $m$ and $n$ are relatively prime positive integers. What is $m+n$?
Let $P$ be a cubic polynomial with $P(0) = k$, $P(1) = 2k$, and $P(-1) = 3k$. What is $P(2) + P(-2)$?
Define the function $f_1$ on the positive integers by setting $f_1(1)=1$ and if $n=p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}$ is the prime factorization of $n>1$, then \[f_1(n)=(p_1+1)^{e_1-1}(p_2+1)^{e_2-1}\cdots (p_k+1)^{e_k-1}.\] For every $m\ge 2$, let $f_m(n)=f_1(f_{m-1}(n))$. For how many $N$ in the range $1\le N\le 400$ is the sequence $(f_1(N),f_2(N),f_3(N),\dots )$ unbounded? Note: A sequence of positive numbers is unbounded if for every integer $B$, there is a member of the sequence greater than $B$.
Let $f(x)=ax^2+bx+c$, where $a$, $b$, and $c$ are integers. Suppose that $f(1)=0$, $50 < f(7) < 60$, $70 < f(8) < 80$, and $5000k < f(100) < 5000(k+1)$ for some integer $k$. What is $k$?
Let $f_{1}(x)=\sqrt{1-x}$, and for integers $n \geq 2$, let $f_{n}(x)=f_{n-1}(\sqrt{n^2 - x})$. If $N$ is the largest value of $n$ for which the domain of $f_{n}$ is nonempty, the domain of $f_{N}$ is $[c]$. What is $N+c$?
Let $f(x) = 10^{10x}$, $g(x) = \log_{10}\left(\frac{x}{10}\right)$, $h_1(x) = g(f(x))$, and $h_n(x) = h_1(h_{n-1}(x))$ for integers $n \geq 2$. What is the sum of the digits of $h_{2011}(1)$?
Monic quadratic polynomials $P(x)$ and $Q(x)$ have the property that $P(Q(x))$ has zeros at $x=-23, -21, -17,$ and $-15$, and $Q(P(x))$ has zeros at $x=-59,-57,-51$ and $-49$. What is the sum of the minimum values of $P(x)$ and $Q(x)$?
Suppose that $f(x+3)=3x^2 + 7x + 4$ and $f(x)=ax^2 + bx + c$. What is $a+b+c$?
Functions $f$ and $g$ are quadratic, $g(x) = - f(100 - x)$, and the graph of $g$ contains the vertex of the graph of $f$. The four $x$-intercepts on the two graphs have $x$-coordinates $x_1$, $x_2$, $x_3$, and $x_4$, in increasing order, and $x_3 - x_2 = 150$. The value of $x_4 - x_1$ is $m + n\sqrt p$, where $m$, $n$, and $p$ are positive integers, and $p$ is not divisible by the square of any prime. What is $m + n + p$?
A function $f$ has domain $[0,2]$ and range $[0,1]$. (The notation $[a,b]$ denotes $\{x:a \le x \le b \}$.)
What are the domain and range, respectively, of the function $g$ defined by $g(x)=1-f(x+1)$?
The function $f$ has the property that for each real number $x$ in its domain, $1/x$ is also in its domain and $f(x)+f\left(\frac{1}{x}\right)=x$. What is the largest set of real numbers that can be in the domain of $f$?
For each $x$ in $[0,1]$, define \[\begin{array}{clr} f(x) & = 2x, & \text { if } 0 \leq x \leq \frac {1}{2}; \\ f(x) & = 2 - 2x, & \text { if } \frac {1}{2} < x \leq 1. \end{array}\] Let $f^{[2]}(x) = f(f(x))$, and $f^{[n + 1]}(x) = f^{[n]}(f(x))$ for each integer $n \geq 2$. For how many values of $x$ in $[0,1]$ is $f^{[2005]}(x) = \frac {1}{2}$?
If $f$ is a function such that $f(f(x)) = x^2 - 1$, what is $f(f(f(f(3))))$?
Let $a > 0$, and let $P(x)$ be a polynomial with integer coefficients such that $P(1) = P(3) = P(5) = P(7) = a$, and $P(2) = P(4) = P(6) = P(8) = -a$. What is the smallest possible value of $a$?
Let $f(x) = x^2 + 5$, and $g(x) = 2(f(x))$. What is the greatest possible value of $f(x + 1)$ when $g(x) = 108$?
Let $f(x) = \sqrt{2^2-x^2}$. Find the value of $f(f(f(f(f(-1)))))$.
Find all polynomials $f(x)$ such that $f(x^2) = f(x)f(x+1)$.
Let $x, y \in [-\frac{\pi}{4}, \frac{\pi}{4}]$, $a \in \mathbb{Z}^+$, and $$ \left\{ \begin{array}{rl} x^3 + \sin x - 2a &= 0 \\ 4y^3 +\frac{1}{2}\sin 2y +a &=0 \end{array} \right. $$ Compute the value of $\cos(x+2y)$.
Let $f$ be a real-valued function such that $f(x) + 2f\left(\frac{2002}{x}\right) = 3x$ for all $x > 0$. Find $f(2)$.
The impacts and unintended consequences of the nationwide pricing reform for drugs and medical services in the urban public hospitals in China Xiaoxi Zhang1 na1, Hongyu Lai2 na1, Lidan Zhang3, Jiangjiang He1, Bo Fu2 & Chunlin Jin1 Since 2015, China has been rolling out the pricing reform for drugs and medical services (PRDMS) in the urban public hospitals in order to reduce drug expenditures and to relieve financial burdens of patients. This study aims at evaluating the effectiveness of the reform and investigating its positive impacts and unintended consequences to provide evidence basis for further policy making. The Difference-in-difference (DID) approach was employed to analyze the reform impacts on the 31 provincial administrative areas in China based on data abstracted from China Statistics Yearbooks and China Health Statistics Yearbooks from 2012 to 2018. The reform resulted in a decrease of 7.59% in drug cost per outpatient visit, a decrease of 5.73% in drug cost per inpatient admission, a decrease of 3.63% in total cost per outpatient visit and an increase of 9.10% in surgery cost per inpatient admission in the intervention group. However, no significant change in examination cost was found. The reduction in the medical cost per inpatient admission was not yet demonstrated, nor was that in the total outpatient/ inpatient expenses. The nationwide pricing reform for drugs and medical services in urban public hospitals (PRDMS-U) in China is demonstrated to be effective in cutting down the drug expenditures. However, the revealed unintended consequences indicate that there are still significant challenges for the reform to reach its ultimate goal of curbing the medical expenditures. We conclude that the pricing reform alone may not be enough to change the profit-driven behavior of medical service providers as the root cause lies in the unchanged incentive scheme for providers in the service delivery. This holds lessons for policy making of other low- and middle-income countries (LMICs) with similar health systems set up in the achievement of Universal Health Coverage (UHC). Over the past 70 years, China's health system has undergone vast changes under the profound impacts of the country's economic reform [1]. In the early years since the People's Republic of China (PRC) was founded in 1949, the Chinese economy was dominated by central planning and the government took complete charge of its health system [1, 2]. At that time, the government decided on the allocation of health resources and directed the health financing and service delivery. All the health facilities that provided health care, such as hospitals, were solely owned, financed and operated by the government [2]. The government devoted to improving the equity in health service use and impressive improvement in the whole population's health outcomes were also achieved with only limited health resources [3]. However, the centrally controlled economy led to vast inefficiency and poverty, so China embarked on its economic reform in 1978. Since then, the market force had been performing an increasingly important role in the economy, which also led to the marketization of the health sector in the country [4, 5]. Thereafter, a series of policy interventions were staged to strengthen the market force in the health sector, including the decentralization of public hospital management [6]. 
As such, the government ceased to fully subsidize the public hospitals so that the public hospitals had to undertake responsibilities for their own profits and losses. At the same time, the government promoted the public hospitals' autonomy by allowing them to self-manage and determine the pricing of medical services and drugs. The subsidies from the government to public hospital shrank sharply from more than 60% of its total revenue by 1980 to less than 25% by 2008 [7]. That is, instead of relying on the government to finance as before, the public hospitals had to make profits from the drugs and services provided to finance themselves [8, 9]. Consequently, government subsidies, health services and drug sales became the three main sources of hospital's revenues. In 2012, over 40% of the hospital's revenues came from drug sales while only approximately 10% came from governmental subsidies [10]. In order to obtain the profit margin, the drugs were allowed to be priced with up to 15% mark-up on the actual purchase price [11]. Moreover, an incentive scheme was introduced to link the physicians' merit pay, that is a major part of their income, to the hospital's profits, which would encourage them to prescribe more profitable drug or service [12, 13]. Unlike the successes in the economic reform [14], the marketization of the health sector in China has experienced severe challenges. Once the for-profit management scheme of the public hospital had been established, the motivation of profit-seeking became perverse among health care providers, which led to a significant increase in the revenue of public hospitals and brought about substantial negative impacts [15]. The health care providers are motivated to induce the demand of patients and over-prescribe drugs and diagnostic tests, which resulted in the alarming escalation of health expenditure [16]. From 2007 to 2012, the growth rate of health expenditure (14.9%) far exceeded that of gross domestic product (GDP) (10.2%) [17]. In 2012, the drug expenses accounted for over 50% of the total medical expenditure per outpatient visit and over 40% per inpatient admission [13]. Not only that, extensive over-prescription gave rise to the occurrence of microbial resistance and false-positives diagnostic tests, threatening the quality of health care [18, 19]. Thereupon, complaints from Chinese people on the difficulties of affording quality health care prevailed [20], which were frequently referred to as the lament of "kanbingnan, kanbinggui" or "insurmountable access barriers to health care, insurmountable high health costs" [1]. The outbreak of the SARS epidemic in 2003 further intensified people's dissatisfaction and thereby the pressing necessities to reexamine the health system [21], which eventually led to the launch of the 2009 reform [22]. With the goal of "everyone has affordable access to basic health care", the equivalent of $230 billion was committed heavily to the reform between 2009 and 2011 [23]. After several years of efforts, some significant achievements were made, especially in improving health insurance coverage [24, 25]. However, the reform to the public hospitals failed to yield any encouraging progress [1, 26]. In China, public hospitals, which are capable to provide over 80% of the overall inpatient and outpatient services, play the most important role in health care delivery [13]. 
Therefore, suitable intervention strategies introduced into public hospitals are of crucial importance in the process of an effective health system reform [27], where the aim is to change the profiting scheme of public medical service providers by reemphasizing their mission of improving public welfare instead of earning incomes [28]. Among the various interventions in public hospitals, the pricing reform is regarded as the most substantial instrument, with the core measure as the zero drug mark-up policy, which is to eliminate the up to 15% profit margin that was previously allowed to be added on the actual drug purchase price. In order to ensure the sustainability of the intervention in drug pricing, the pricing of medical services was also adjusted, including raising surgical fee and reducing laboratory fee [29, 30]. The primary aim of the pricing reform for drugs and medical services (PRDMS) is to reduce drug expenditures and thereby to reduce medical expenditures and financial burdens of patients. Meanwhile, by cutting off the economic linkage between drug sales and drug use, the policy also intends to rectify physicians' behavior in service provision, so as to contribute to the improvement of the quality and accessibility of health care [31]. For the sake of smooth and stable implementation, the government has adopted a step-by-step strategy to push forward the pricing reform. Before intervening into urban public hospitals, the policy had been put into effect in every county-level public hospital (PRDMS-C) by 2015 [32]. Based on the lessons from the implementation in county-level hospitals, some provinces, like Zhejiang and Anhui, took the lead to launch the reform in urban public hospitals. Subsequently, the pricing reform for drugs and medical services in the urban public hospitals (PRDMS-U) has been able to roll out in every public hospital throughout the country as of September, 2017 [33]. A few existing studies have been conducted to evaluate the impacts of PRDMS-C, after it took effect in different areas in China, such as Sanming [34], Zhejiang [35], Hubei [26, 36], Guangxi [37], etc. Most of the studies showed that the reform reduced drug cost whereas its effectiveness in containing medical expenditures was questionable with some unintended consequences [38,39,40,41,42,43]. For example, through the DID approach, Fu et al. [34] analyzed the public hospital reform in Sanming and showed that the Sanming model was able to reduce drug cost and total medical expenditures without measurably sacrificing the quality or the efficiency of health service provision. It affirmed the effectiveness of the reform in Sanming due to its systematic design and forceful implementation of the policy interventions and justified the nationwide promotion of Sanming model. Using a retrospective pre/post-reform design, Zhang et al. [35] analyzed the questionnaire data from selected county-level public hospitals in Zhejiang from 2011 to 2012 and concluded with a decrease of the supplier-induced demand in drugs but an increase in medical services. Besides, in a study conducted in Hubei, Zhang et al. [26] found that the decrease of drug costs resulted from the reform did not lead to the reduction of personal health spending. As for the nationwide evaluation of the PRDMS-C, Fu et al. 
[41] conducted a sample investigation to 1880 county-level hospitals across the country and found that the policy resulted in a reduction in drug expenditures together with an increase in diagnostic tests expenditures, which had not measurably contributed to the containing of total health expenditures. After the reform was completed in county-level hospitals, it is now fully practiced in urban public hospitals in China. Compared with county-level hospitals, the service volumes of urban public hospitals are usually much larger, and the medical services provided are generally more advanced and comprehensive, hence the impacts of reform in urban public hospitals would be even more substantial. Despite some previous studies on the effects of PRDMS-C, the fundamental differences between county-level and urban hospitals limited the generalizability of conclusions of those previous studies on county-level hospitals to urban ones. Although several literatures presented preliminary evaluations in urban cities like Nanjing [42], Beijing [43], etc., the conclusions from these studies can hardly reflect country-level effects of the reform in general as the evidence from the selected locations can hardly be generalized to other areas with different economic and health development background. In China, a reasonable fee schedule for drugs and medical services has yet been well-established. Detecting the positive influences and unintended consequences of the nationwide pricing reform in urban public hospitals, the most influential player in health service provision in China, is urgent for policy makers to draw lessons from. Therefore, a nationwide impact evaluation of the PRDMS-U with improved methods might be in sore need to provide some empirical evidence to inform further policy-making. The objectives of the PRDMS-U include four aspects. Firstly, it aims to reduce drug costs through the elimination of the drug mark-up. Secondly, it intends to adjust the cost structure by meanwhile increasing surgical fee and decreasing examination fee. Thirdly, it endeavors to contribute to the reduction of the total medical expenditure. At last, it attempts to rectify the supplier-induced demand and to improve accessibility of medical service for people. The hypothesis of our study is that, the policy has almost achieved the first and the second objectives but not yet realized the third and the fourth ones. Data sources and variables selection We analyze macroeconomic data of 31 provinces/ municipalities collected from China Health Statistics Yearbooks 2012–2018 and China Statistics Yearbooks 2012–2018. The nationwide PRDMS-U was initiated in five provinces in 2015 and then extended to another 14 provinces in 2016. At the end of 2017, all the other 12 provinces were required by the national authority to roll out the reform although some of them were not able to implement the reform until early 2018. Both the timing and the impact of the PRDMS-U vary across provinces, which makes the PRDMS-U be as a "natural experiment". Given this, we take 2016 as the cut-off point, dividing the observation time into the pilot period (2015–2016) and the non-pilot period (2017–2018). 
Hence, we define the 19 provinces (Anhui, Fujian, Hebei, Heilongjiang, Hunan, Nei Mongol, Jiangsu, Jiangxi, Liaoning, Shaanxi, Shandong, Shanghai, Tianjin, Zhejiang, Guizhou, Qinghai, Sichuan, Xinjiang, and Yunnan) that initiated the PRDMS-U in the pilot period as the intervention group, while the other provinces (Beijing, Chongqing, Guangdong, Guangxi, Hainan, Henan, Hubei, Jilin, Shanxi, Tibet, Gansu, and Ningxia) are defined as the control group. The idea of grouping based on the timing of policy initiation in a DID analysis has been widely used in the economics literature [44,45,46,47]. To test the hypothesis, we select several expenditure-related outcome variables to measure the effects of the PRDMS-U: the total outpatient expenditure, the total inpatient expenditure, the total expenditure per outpatient visit, the total expenditure per inpatient admission, the drug cost per outpatient visit, the examination cost per outpatient visit, the drug cost per inpatient admission, the examination cost per inpatient admission, and the surgical cost per inpatient admission. Following Fu et al. [34], we also include per capita GDP, per capita public budget revenue, and the ratio of primary industry production to GDP in the analysis as control variables. All the expenditure-related variables are deflated to 2010 yuan (CN¥) using the CPI, and all the variables enter the analysis in logarithms.
Our empirical strategy is to compare the pre- and post-reform changes in these outcomes between the intervention and the control groups, both of which eventually implemented the PRDMS-U. We employ the difference-in-difference (DID) method to evaluate the effectiveness of the PRDMS-U using panel data from the 31 provinces/ municipalities in China over the period 2012–2018. The basic model (1) is as follows:
$$ {Y}_{pt}=\beta \bullet {Intervention}_p\bullet {postPRDMSU}_t+\boldsymbol{\delta} \bullet {\boldsymbol{Control}}_{\boldsymbol{pt}}+{\alpha}_p+{\gamma}_t+{\varepsilon}_{pt} $$
where $Y_{pt}$ denotes the outcome variable for the $p$-th province in the $t$-th year; the dummy variable $Intervention_p$ equals 1 if the $p$-th province belongs to the intervention group and 0 otherwise; the dummy variable $postPRDMSU_t$ equals 1 in the years after the province implemented the PRDMS-U and 0 otherwise; $Control_{pt}$ is the vector of control variables described above, included to account for potentially confounding provincial characteristics; $\alpha_p$ is a province fixed effect that controls for unobserved time-invariant characteristics of the $p$-th province that may affect the outcome variable; $\gamma_t$ is a year fixed effect that controls for nation-wide shocks occurring in the $t$-th year; and $\varepsilon_{pt}$ is a random error term. The parameter of interest in the difference-in-differences model is $\beta$, the coefficient on the interaction between $Intervention_p$ and $postPRDMSU_t$; $\boldsymbol{\delta}$ is the corresponding vector of coefficients for the control variables.
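To make the estimation concrete, the following is a minimal sketch of how model (1) could be estimated as a two-way fixed-effects regression in Python; the file name and column names (panel.csv, province, year, intervention, post_prdmsu, and so on) are hypothetical placeholders, and the snippet illustrates the estimator rather than reproducing the authors' actual code.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format panel: one row per province-year, 2012-2018.
df = pd.read_csv("panel.csv")

# DID interaction term of model (1): 1 for intervention provinces in the
# years after they launched the PRDMS-U, 0 otherwise.
df["did"] = df["intervention"] * df["post_prdmsu"]

# Outcomes and controls enter in logarithms, as in the paper.
for col in ["drug_cost_per_outpatient_visit", "gdp_per_capita",
            "budget_revenue_per_capita", "primary_industry_share"]:
    df["log_" + col] = np.log(df[col])

# C(province) and C(year) dummies absorb the fixed effects alpha_p and
# gamma_t; standard errors are clustered at the province level.
fit = smf.ols(
    "log_drug_cost_per_outpatient_visit ~ did + log_gdp_per_capita"
    " + log_budget_revenue_per_capita + log_primary_industry_share"
    " + C(province) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["province"]})

print(fit.params["did"], fit.pvalues["did"])  # the DID estimate of beta

Because the outcomes are in logarithms, the coefficient on the interaction term is read as an approximate proportional change in the outcome, which is how the percentage effects reported in the results below are obtained.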
Comparing the pre-reform trends for the intervention and control group
The difference-in-differences estimator $\hat{\beta}$ is consistent only if the differences in the outcome medical expenditures between the intervention and the control groups would have remained constant in the absence of the reform. Therefore, non-parallel pre-reform trends arising from preexisting differences between the two groups would pose a potential challenge to the difference-in-differences strategy. To address this problem, we replace the first term on the right-hand side of model (1) by $\beta_t \bullet Intervention_p \bullet pre2015_t \bullet \boldsymbol{Year}_t$, where $pre2015_t$ equals 1 if the year is before 2015 and $\boldsymbol{Year}_t$ is a vector of year dummy variables. The coefficient $\beta_t$ describes the differential change in medical expenditures between the two groups in year $t$ before the PRDMS-U. The nationwide PRDMS-U was initiated in five provinces in 2015 and then extended to the whole country; hence, the annual treatment effects $\beta_t$ before 2015 can be used to verify the parallel trends. Model (2) is as follows:
$$ {Y}_{pt}={\beta}_t\bullet {Intervention}_p\bullet {pre2015}_t\bullet {\boldsymbol{Year}}_t+\boldsymbol{\delta} \bullet {\boldsymbol{Control}}_{\boldsymbol{pt}}+{\alpha}_p+{\gamma}_t+{\varepsilon}_{pt} $$
Robustness check: controlling for preexisting time trends
Both the intervention and the control groups may have an increasing trend in medical expenditures after the PRDMS-U because of preexisting time trends or price rigidity, which would cause the effects of the PRDMS-U to be underestimated in the DID analysis. We therefore extend model (1) by including an additional group-specific time trend term to control for potential time trends carried over from the pre-reform period, and obtain model (3) as follows:
$$ {Y}_{pt}={\beta}_t\cdot {Intervention}_p\cdot {\boldsymbol{YEARpostPRDMSU}}_{\boldsymbol{t}}+\boldsymbol{\delta} \cdot {\boldsymbol{Control}}_{\boldsymbol{pt}}+\varphi \cdot {Intervention}_p\cdot \mathrm{T}+{\alpha}_p+{\gamma}_t+{\varepsilon}_{pt} $$
where $\beta_t$ represents the annual reform effect of the PRDMS-U in year $t$ after the PRDMS-U, $\gamma_t$ indicates the year fixed effects, and the term $\varphi \cdot Intervention_p \cdot \mathrm{T}$ controls for preexisting time trends, where $\mathrm{T}$ is a vector of time variables.
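As a concrete illustration, the pre-2015 interaction terms of model (2) could be estimated along the following lines; the data layout and column names are the same hypothetical placeholders used in the previous sketch, and the snippet only shows the mechanics of the check, not the authors' implementation.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")  # same hypothetical panel as in the previous sketch
for col in ["drug_cost_per_outpatient_visit", "gdp_per_capita",
            "budget_revenue_per_capita", "primary_industry_share"]:
    df["log_" + col] = np.log(df[col])

# Interactions of the intervention indicator with each pre-reform year,
# corresponding to the beta_t terms of model (2).
for y in (2012, 2013, 2014):
    df["int_x_" + str(y)] = df["intervention"] * (df["year"] == y).astype(int)

pretrend = smf.ols(
    "log_drug_cost_per_outpatient_visit ~ int_x_2012 + int_x_2013 + int_x_2014"
    " + log_gdp_per_capita + log_budget_revenue_per_capita"
    " + log_primary_industry_share + C(province) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["province"]})

# Parallel trends are supported if the pre-2015 interactions are jointly
# indistinguishable from zero.
print(pretrend.f_test("int_x_2012 = 0, int_x_2013 = 0, int_x_2014 = 0"))

Model (3) would correspond to adding a group-specific linear trend, the $\varphi \cdot Intervention_p \cdot \mathrm{T}$ term, to the same kind of regression.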
Table 1 shows the mean values of the observed outcome variables before (period = 0) and after (period = 1) the reform in the intervention group and the control group. The drug cost per inpatient admission experiences a sharp decrease after the reform both in the intervention group, 11.80% (Table 1, column 6), and in the control group, 16.20% (Table 1, column 3).
Table 1 Summary statistics
For the intervention group, the annual growth rate of the drug cost per outpatient visit decreases from 3.34% to −2.05% (Table 1, columns 4–5) and that of the drug cost per inpatient admission decreases from 0.66% to −7.05% (Table 1, columns 4–5). The mean of the surgery cost per inpatient admission increases from 448.63 yuan to 565.52 yuan (Table 1, columns 4–5) after the reform in the intervention group, and the growth rate of the surgery cost per inpatient admission is 26.06% (Table 1, column 6), significantly higher than the 21.96% (Table 1, column 3) in the control group. These results provide evidence in favor of our hypothesis. Moreover, in the intervention group, the annual growth rate of the total cost per outpatient visit decreases from 4.98 to 2.69% (Table 1, columns 4–5) after the reform, and the annual growth rate of the total cost per inpatient admission decreases from 4.22 to 1.98% (Table 1, columns 4–5) after the reform. Likewise, the annual growth rate of the total outpatient cost decreases from 11.86 to 7.52% (Table 1, columns 4–5) after the reform, and that of the total inpatient cost decreases from 11.46 to 8.14% (Table 1, columns 4–5) after the reform in the intervention group. Additionally, it is noticeable that the examination cost per outpatient visit increases by 10.53% after the reform took effect in the intervention group. Based on the t-test results (Table 1, columns 7–8), the difference in the control variables between the two groups is not significant, which reveals that the two groups are not heterogeneous in terms of economic and social conditions. Besides, the drug cost per inpatient admission differs significantly between the two groups in period 1 (Table 1, column 8), which is consistent with our hypothesis.
In order to verify the plausibility of applying the DID method (i.e., that the parallel trend assumption is satisfied), we compare the means of the outcome variables between the intervention group and the control group for every year in Fig. 1. All the mean outcome trajectories of the intervention group remain parallel to those of the control group before 2015, although they are separate from each other for all the outcome variables except the total cost per inpatient admission, which demonstrates that there is no heterogeneous trend between the two groups. It is also seen that both the average drug cost per outpatient visit and the average drug cost per inpatient admission show a decrease in 2016 and 2017 in the intervention group and the control group respectively. Additionally, the magnitude of the increase in surgery cost is shown to be larger in the intervention group.
Fig. 1 The time trends of outcome variables measuring medical care cost per outpatient visit/ inpatient admission
We also use model (2) to test whether there exists any non-parallel pre-reform time trend between the intervention and control groups. The results are shown in Table 2, which reveal no significant differences in pre-reform trends between the intervention and the control groups for most expenditure variables except the drug cost per outpatient visit and the drug cost per inpatient admission.
Table 2 Parallel trend test of outcome variables
Medical care cost per outpatient visit / inpatient admission
Based on the findings from our parallel trend analysis, we employ the basic DID model (1) for the outcome variables to evaluate the effectiveness of the PRDMS-U. The regression results are shown in Table 3. Firstly, compared with the control group, the PRDMS-U results in a decrease of 7.50% ($=1-e^{-0.078}$, p < 0.05) in drug cost per outpatient visit (Table 3, column 2) and 5.73% ($=1-e^{-0.059}$, p < 0.05) in drug cost per inpatient admission in the intervention group (Table 3, column 5), which indicates that the reform policy was effective in cutting the drug expenditure. Secondly, in the intervention group, the PRDMS-U produces a 3.63% ($=1-e^{-0.037}$, p < 0.05) decrease in the total cost per outpatient visit per year after the reform's implementation (Table 3, column 1), which demonstrates the effect of the policy on decreasing the total medical care cost per outpatient visit. However, the coefficient of the total cost per inpatient admission is not statistically significant (Table 3, column 4), implying that the reform effects on decreasing the total cost per inpatient admission are not yet observable. Thirdly, the coefficient of the examination cost per outpatient visit or per inpatient admission is not significant (Table 3, columns 3 and 6), indicating that the reform has no significant impact on the examination cost. Additionally, the coefficient of the examination cost per inpatient admission is positive (Table 3, column 6), which implies a potential unintended consequence of increasing the examination cost. Finally, compared with the surgery cost of the control group, that of the intervention group increases by 9.10% ($=e^{0.087}-1$, p < 0.05) after the reform (Table 3, column 7), indicating that the reform leads to an increase in surgery cost.
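Since the outcome variables enter the regressions in logarithms, each estimated coefficient $\hat{\beta}$ is converted into a percentage effect as $100\times(e^{\hat{\beta}}-1)\%$; as a quick check of the figures quoted above for the drug cost per outpatient visit and the surgery cost per inpatient admission:
$$ 100\times\left(e^{-0.078}-1\right)\% \approx -7.5\%, \qquad 100\times\left(e^{0.087}-1\right)\% \approx +9.1\%. $$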
Along with the decreased drug cost, it is shown that the reform to some extent promotes the optimization of the fee schedule for drugs and medical services in the urban public hospitals in China. Total outpatient/ inpatient expenses In order to examine the impacts of the PRDMS-U on the total expenditure, we multiply the cost per outpatient visit/ inpatient admission to the number of visits/admissions and obtain the total outpatient/ inpatient expenses as other outcome variables for our analysis. From the results of the DID analysis in Table 4, there is no evidence proving the statistically significant effects of the PRDMS-U on decreasing either the total outpatient expenses or the inpatient expenses. Nonetheless, we find that the coefficients are shown to be negative and the decreasing trend in the annual growth rate of the total expenses can be found in the intervention group from Table 1. It remains to be judged whether the reform has achieved its goals in curbing the upraise of the total medical expenses. Table 3 Impact of the reform on medical care cost per outpatient visit / inpatient admission Table 4 Impact of the reform on total outpatient/ inpatient medical care expenses Robustness checks We apply model (3) to control preexisting time trends, and the results are shown in Figures a and b in Additional file 1, which present robustness checks for drug cost per outpatient visit and drug cost per inpatient admission respectively. The y-axis plots coefficients for the year-specific effects βt and the year-fixed effects γtδt. The line for the intervention group indicates the aggregation of βt and γt, and the line for the control group indicates γt. Figure a shows that there is a difference of the year-specific coefficients of the drug cost per outpatient visits between the intervention group and the control group occurring after 2014, and the year-specific coefficients of the intervention group are less than 0 from 2012 to 2018. Figure b indicates that the year-specific coefficients of drug cost per inpatient admission are more than 0 in both two groups from 2012 to 2018, and a similar difference between the two groups occurs after 2014. The results of model (3) in Figure a and b in Additional file 1 confirm that the drug expense decreases more significantly in the intervention group than in the control group after the implementation of the PRDMS-U, even when preexisting time trends are controlled. After the launch of the PRDMS-U, all the urban public hospitals eliminated the drug mark-up and adjusted the prices of medical services; simultaneously, the drug procurement scheme and insurance payment methods were reformed to a certain extent as accompanying policies. Nevertheless, there was a variance in the scope and range of the price adjustment in different areas according to local conditions [33]. In 2015, the General Office of the State Council released the Guiding Opinions on the Pilot Comprehensive Reform of Urban Public Hospitals [48]. According to the document, local governments should be responsible for implementing the PRDMS-U; local health administrative departments should be responsible for monitoring the progress of the reform and conducting the progress evaluation for hospitals, the results of which should be substantially linked to the financial subsidies for hospitals and the appointment of hospital directors. Besides, the document also required the percentage of drug expenditure in total medical expenditure to be reduced to about 30%. 
To meet the requirement, while eliminating the drug mark-up, in fact, public hospitals had to adopt some circumvention measures, such as asking patients to purchase drugs in out-of-hospital pharmacies or raising total expenditures to dilute the share of drug expenditures. In 2019, the General Office of the State Council issued the Opinions on Strengthening the Performance Evaluation of Tertiary Public Hospitals [49], in which the requirement for the drug expenditure share had been cancelled. Based on the conclusions from existing literature, the effects of the pricing reform in public hospitals varied in different scenarios. Fu. et al. [34] evaluated the pilot reforms of public hospitals in Sanming, where the reform achieved tremendous success in reducing drug and medical expenditures, and attributed the effect of the reform there to its substantial alignment of the price adjustment with the reform in the governance structure, payment method and physician compensation scheme. Whereas, some negative impacts of the reform were claimed in more studies conducted elsewhere [35,36,37,38,39,40,41,42,43]. Tang et al. analyzed antibiotic uses after the reform in Hubei and found that the reform contributed to an increase in the injection of antibiotics, as the hospitals attempted to profit from drug-associated services, such as injections, after the zero drug mark-up policy [36]. Jiang et al. evaluated the reform in Guangxi and suggested that the reform contributed little to the operation efficiency of hospitals and negatively affected clinical quality [37]. Our study contributes to the evidence of the nationwide evaluation of PRDMS-U in China. Through examining the province-level data of 31 areas in China from 2012 to 2018 with the difference-in-difference (DID) approach, our study results indicate that the implementation of the PRDMS-U, with the core measure as the zero drug mark-up policy, is associated with significant reductions in the drug expenses per inpatient admission/ outpatient visit. In other words, the results show that the policy contributes to the reduction in the drug expenditure, which suggests that the policy is on the right track and its preliminary goal has been achieved. In spite of the striking decrease in drug cost along with the measurable increase in surgical cost per inpatient admission presented, no significant change in examination cost is found, which suggests that the reform objective to adjust the fee schedule for drugs and medical services has not been fully realized despite some positive progress discovered. Moreover, the reduction in the medical cost per inpatient admission is not yet demonstrated, nor is the total outpatient/ inpatient expenses. These results indicate that, cost-shifting with supplier-induced demand occurs as physicians tend to prescribe more examinations and tests to compensate for the profit loss from drugs, which notably undermines the effectiveness of the PRDMS-U as a whole and results in the failure to reach the ultimate goal of curbing unnecessary expenditures. It is indicated that the pricing intervention alone is unable to relieve the supplier-induced demand. Essentially, despite the elimination of the drug profit margin, the compensation scheme for public hospitals and the payment scheme for physicians remain unreformed. The profits generated from drugs and services still constitute the major part of the revenue of hospitals, a proportion of which makes up the merit pay for physicians. 
With unaltered economic motivations of suppliers, the reform policy can barely rectify the behavior of health service providers in the concrete sense. This finding is consistent with the conclusion of the national evaluation conducted by Fu. et al. [34] on the reform in county-level hospitals, which also questioned the effects of price control over certain drugs and medical services to curb the health expenditure [41]. It has been pointed out in a number of studies [33, 35,36,37, 40,41,42,43] that the integration of policy interventions was crucial to the effects of the reform and that the piecemeal remedies of the policies could easily lead to circumvention behaviors of health service providers. The underlying issue is that current China's health system suffers from serious market failures [50]. Overuse of the market force in service delivery may cause hazards to the equity and affordability of health care [50,51,52,53,54]. Moreover, the particular characteristics of health care market, such as extensive asymmetry of information between suppliers (physicians and hospitals) and demanders (patients), have exerted uncertainty on drawing upon the path of economic reform for health care [16, 55, 56]. Once the for-profit motives get deeply entrenched, the suppliers are prone to induce the demand and push up the price of some profitable drugs or services [57, 58]. Regrettably, the uniqueness of health care market has not been identified thoroughly and the strategy for enterprise management in China's economic reform has been simply carried over into reforming public hospitals [1]. Interventions from comprehensive scopes should be aligned appropriately to confront the unintended consequences of the PRDMS-U. Above all, from the macroscopic perspective, the role of the government in the health service system should be strengthened to rectify past mistakes [59] in over marketization, including the over-decentralization in the management and development of public hospitals. The government should consider increasing financial subsidies to public hospitals so as to impose greater influence on its economic operation. Moreover, it is of critical importance for the government to be forceful in systematically integrating the policy measures to avoid circumvention behaviors from service providers as a result of the fragmentation and incoordination of governance. In addition, from the microscopic perspective, the financial incentive mechanisms for suppliers (hospitals and physicians) should be redesigned to positively drive the practice in service provision. On one hand, the financing mechanism of public hospitals should be changed to reduce the dependence of hospital economic operations on drugs and service income. On the other hand, the incentive mechanism for medical staff in public hospitals should be reformed to delink their income from service provision, and meanwhile the public hospital's authority in using their revenue for staff merit payment should be limited [60]. Besides, a value-based pricing scheme [61, 62] for health care service should be established. This study contributes to the knowledge on the nationwide impacts of the pricing reform for drugs and medical services in the urban public hospitals (PRDMS-U). It demonstrates the effectiveness of the reform on cutting the drug expenditure despite some unintended consequences, which reassures the conclusions in some of the previous studies conducted in pilot areas. 
As our data were collected from the secondary routine databases, the concerns over the report biases have inevitably limited the quality of the data and caused our incapability to deepen the analysis to the micro level. Actually, we've attempted to conduct the propensity score matching (PSM) for our analysis. However, limited by the sample size, the matching process can barely be done sufficiently, which undermines the feasibility of PSM in our scenario. Considering that our study aims at investigating the macro impacts of the policy, it is assumed that some of the individual effects might be offset in the macro-aggregated data, which might be able to reduce the bias caused by the heterogeneity among individuals in the analysis. Moreover, the reason that few statistically significant difference was obtained might be due to the limited sample size and relatively short follow-up. Hence, continuous monitoring research should also be conducted so as to shed light on the long-term impacts of the reform. Additionally, our analysis focuses on evaluating the impacts of the reform in service expenditures, while further research would be needed to investigate the quality of services. Up until now, the PRDMS has been applied to all the public hospitals including county-level and urban ones, which demonstrates the determination of the government in curbing the inflation of medical expenditures and promoting the affordability of health care of people. Our study proves the effectiveness of the policy in decreasing pharmaceutical expenditures. However, the revealed unintended consequences indicate that there are still significant challenges for the reform to confront in the way ahead to reach the ultimate goal. Several potential solutions are proposed. It is evident that unintegrated policy measures are likely to cause circumvention and the pricing instrument alone should not be enough to change the behavior of providers. Therefore, the combination of interventions in the financing mechanisms for hospitals and physicians is essential. In addition, to enhance the pursuit of social benefits [63], the government should play a fundamental role in service provision and increase financial support to public hospitals. These conclusions hold lessons for other low- and middle-income countries (LMICs) who are also conducting reforms to public hospitals for the optimization of their health service delivery [64, 65]. The policy implementation is never a linear process but full of complexity, which suggests the necessity to conduct continuous monitoring of the policy impacts and perform interventions accordingly. The data that support the findings of this study are available from: National Health Commission of the People's Republic of China. 2012–2018. China Health Statistics Yearbook. Beijing: China Statistics Press. National Bureau of Statistics of China. 2012–2018. China Statistics Yearbook. http://www.stats.gov.cn/tjsj/ndsj/ (accessed 19 Oct, 2020) (in Chinese). PRC: PRDMS: pricing reform for drugs and medical services PRDMS-U: pricing reform for drugs and medical services in urban public hospitals PRDMS-C: pricing reform for drugs and medical services in county-level public hospitals DID: difference-in-difference LMICs: low- and middle-income countries UHC: Confidence Interval Yip W, Hsiao W. What drove the cycles of Chinese health system reforms? Health Systems Reform. 2015;1:52–61. PubMed Article PubMed Central Google Scholar Ramesh M, Wu X. Health policy reform in China: lessons from Asia. Soc Sci Med. 
2009;68(12):2256–62. CAS PubMed Article PubMed Central Google Scholar Hesketh T, Wei XZ. Health in China. From Mao to market reform. BMJ. 1997;314:1543. Zhang H. The argument about the new health reform lines: government and market. Reform and Open-up. 2013;4:9. Korolev A. Deliberative democracy nationwide? —evaluating deliberativeness of healthcare reform in China. J Chinese Political Sci. 2014;19:151–72. Blumenthal D, Hsiao W. Privatization and its discontents—the evolving Chinese health care system. New England J Med. 2005;353:1165–70. China National Health Development Research Center. China National Health Accounts Report. Beijing: Ministry of Health; 2009. Hu S, Tang S, Liu Y, et al. Reform of how health care is paid for in China: challenges and opportunities. Lancet. 2008;372:1846–53. Tang S, Meng Q, Chen L, et al. Tackling the challenges to health equity in China. Lancet. 2008;372:1493–501. Ministry of Health. China's health statistics yearbook. Beijing: China Statistics Press; 2013. Eggleston K, Ling L, Qingyue M, et al. Health service delivery in China: a literature review. Health Econ. 2008;17:149–65. Yip W, Hsiao W, Meng Q, et al. Realignment of incentives for health-care providers in China. Lancet. 2010;375:1120–30. Ding XY. Physician compensation report 2012–2013. China Health Human Resources. 2014;5:74–5. Lin JY, Cai F, Li Z. The China miracle: development strategy and economic reform. Beijing: Chinese University Press; 2003. Ge Y, Gong S. Chinese health care reform: problems, reasons and solutions. Beijing: China Development Publishing House; 2007. Li L. The challenges of healthcare reforms in China. Public Health. 2011;125:6–8. National Bureau of Statistics of China. 2007–13. China statistical yearbook. Beijing: China Statistics Press. Li Y, Xu J, Wang F, et al. Overprescribing in China, driven by financial incentives, results in very high use of antibiotics, injections, and corticosteroids. Health Affairs. 2012;31:1075–82. He GX, VandenHof S, VanderWerf MJ, et al. Inappropriate tuberculosis treatment regimens in Chinese tuberculosis hospitals. Clin Inf. 2011;D52:e153–e6. Zhang T, Tang S, Jun G, et al. Persistent problems of access to appropriate, affordable TB services in rural China: experiences of different socio-economic groups. BMC Public Health. 2007;7:19. Yip W, Hsiao W. The Chinese health system at a crossroads. Health Affairs. 2008;27:460–8. Chen Z. Launch of the health-care reform plan in China. Lancet. 2009;373(9672):1322–4. Wu L, Sun Z, Shi Z. Public finance invests more than three trillion RMB in the health reform, Economic Information Daily. Beijing: Xinhua News Agency; 2011. Meng Q, Xu L, Zhang Y, et al. Trends in access to health services and financial protection in China between 2003 and 2011: a cross- sectional study. Lancet. 2012;379:805–14. Yip W, Hsiao W, Chen W, et al. Early appraisal of China's huge and complex health-care reforms. Lancet. 2012;379:833–42. Zhang Y, Ma Q, Chen Y, et al. Effects of public hospital reform on inpatient expenditures in rural China. Health Econ. 2017;26:421–30. Barber SL, Borowitz M, Bekedam H, et al. The hospital of the future in China: China's reform of public hospitals and trends from industrialized countries. Health Policy Plann. 2014;29:367–78. Wagstaff A, Yip W, Lindelow M et al. 2009. China's health system and its reform: a review of recent studies. Health Economics18(Suppl 2): S7–23. Ministry of health and other five Ministries. Guidance on the pilot reform of public hospitals. Beijing: China Legal Press; 2010. 
Mao W, Chen W. 2013. The Zero Mark-up Policy for essential medicines at primary level facilities. World Health Organization Report. https://www.who.int/health_financing/documents/Efficiency_health_systems.../en/, Accessed 10 Aug, 2019. Mao W, Chen M, Chen W, et al. Implementation of essential medicines policy at primary health care institutions. Chinese Health Resources. 2013;16(2):91–2 125. Hu HM, Wu Y, Ye CY, et al. The preliminary evaluation of county-level public hospitals reform. Chinese J Hospital Administration. 2013;29(5):329–35. Wu C, Dai T, Yang Y. Comparative study of comprehensive reform of the pharmaceutical Price in some places in China. Chinese Hospital Management. 2017;37:1–4. Fu H, Li L, Li M, et al. An evaluation of systemic reforms of public hospitals: the Sanming model in China. Health Policy Plann. 2017;32:1135–45. Zhang H, Hu H, Wu C, et al. Impact of China's public hospital reform on healthcare expenditures and utilization: a case study in ZJ Province. Plos One. 2015;10:e0143130. PubMed PubMed Central Article CAS Google Scholar Tang Y, Liu C, Liu J, et al. Effects of county public hospital reform on procurement costs and volume of antibiotics: a quasi-natural experiment in Hubei Province, China. Pharmacoeconomics. 2018;36(8):995–1004. Jiang S, Wu WM, Fang P. Evaluating the effectiveness of public hospital reform from the perspective of efficiency and quality in Guangxi, China. SpringerPlus. 2016;5(1):1922. Shi X, Zhu D, Man X et al. 2019. "The biggest reform to China's health system": did the zero-markup drug policy achieve its goal at traditional Chinese medicines county hospitals? Health Policy and Planningpii: czz053. [Epub ahead of print]. Pan J, Qin X, Hsieh CR. Is the pro-competition policy an effective solution for China's public hospital reform? Health Econ Policy Law. 2016;11(4):337–57. Zhao D, Zhang Z. Qualitative analysis of direction of public hospital reforms in China. Front Med. 2018;12(2):218–23. Fu H, Li L, Yip W. Intended and unintended impacts of price changes for drugs and medical services: evidence from China. Soc Sci Med. 2018;211:114–22. Tang W, Xie J, Lu Y, et al. Effects on the medical revenue of comprehensive pricing reform in Chinese urban public hospitals after removing drug markups: case of Nanjing. J Med Econ. 2018;21(4):326–39. Liu X, Xu J, Yuan B, et al. Containing medical expenditure: lessons from reform of Beijing public hospitals. BMJ. 2019;365:l2369. Petrick M, Zier P. Regional employment impacts of common agricultural policy measures in eastern Germany: a difference-in-differences approach. Agric Econ. 2011;42:183–93. Yang X, Jiang P, Pan Y. Does China's carbon emission trading policy have an employment double dividend and a porter effect? Energy Policy. 2020;142:111492. Yuan C, Liu Y, Wang Z, et al. The effect of the replacement of business tax by VAT on business investment, R&D and labor employment: a DID model analysis basing on Chinese listed Company's data. China Economic Studies. 2015;4:3–13. Zhang Y, Li S, Luo T, et al. The effect of emission trading policy on carbon emission reduction: evidence from an integrated study of pilot regions in China. J Clean Prod. 2020;265:121843. The General Office of the State Council. The Guiding Opinions on the Pilot Comprehensive Reform of Urban Public Hospitals. http://www.gov.cn/xinwen/2015-05/17/content_2863419.htm (Accessed 28 June 2020) (in Chinese). The General Office of the State Council. The Opinions on Strengthening the Performance Evaluation of Tertiary Public Hospitals. 
http://www.gov.cn/zhengce/content/2019-01/30/content_5362266.htm (Accessed 28 June 2020) (in Chinese). Stiglitz JE. Markets, market failures, and development. Am Economic Review. 1989;79:197–203. Arrow KJ. 1969. The organization of economic activity: issues pertinent to the choice of market vs. nonmarket allocation. In: The analysis and evaluation of public expenditure: the PPB system. Joint Economic Committee, 91st Cong., 1st sess., vol. 1. Washington, DC: Government Printing Office, 59-73. Ostrom V, Ostrom E. Public goods and public choices. In: McGinnis M, editor. Polycentricity and local public economies readings from the workshop in political theory and policy analysis. Ann Arbor: University of Michigan Press; 1999. p. 75–105. Hayek FA. The constitution of liberty: the definitive edition. New York: Routledge Press; 2013. Friedman M. Capitalism and freedom. Chicago: University of Chicago Press; 2002. McGuire TG. Physician agency. Handbook Health Econ. 2000;1:461–536. Folland S, Goodman AC, Stano M. The economics of health and health care. New Jersey: Prentice Hall; 2007. Reynolds L, McKee M. Serve the people or close the sale? Profit-driven overuse of injections and infusions in China's market-based healthcare system. Int J Health Plann Manag. 2011;26:449–70. Chow GC. An economic analysis of health care in China: Princeton University; 2006. Hsiao W. Correcting past health policy mistakes. Daedalus. 2014;143:53–68. Wagstaff A, Bales S. 2012. The impacts of public hospital autonomization: evidence from a quasi-natural experiment. World Bank policy research working paper. https://elibrary.worldbank.org/doi/abs/10.1596/1813-9450-6137, Accessed 11 Aug 2019. Jian W, Lu M, Chan KY, et al. Payment reform pilot in Beijing hospitals reduced expenditures and out-of-pocket payments per admission. Health Affairs. 2015;34:1745–52. Danzon P, Towse A, Mestre-Ferrandiz J. Value-based differential pricing: efficient prices for drugs in a global context. Health Econ. 2015;24:294–301. Horwitz JR. Making profits and providing care: comparing nonprofit, for-profit, and government hospitals. Health Aff. 2005;24(3):790–801 PMID: 15886174. Adam T, Hsu J, de Savigny D, et al. Evaluating health systems strengthening interventions in low-income and middle-income countries: are we asking the right questions? Health Policy Plan. 2012;27(Suppl 4):iv9–19. Preker AS, Harding A. A conceptual framework for the organizational reforms of hospitals. In: Harding A, Preker AS, editors. Innovations in health service delivery: the corporatization of public hospitals. Washington, DC: World Bank; 2003. p. 23–78. Xiaoxi Zhang, Chunlin Jin and Jiangjiang He's work was supported by Shanghai Municipal Health Commission [grant number 20194Y0310] and Shanghai Planning Office of Philosophy and Social Science [grant number 2019BGL030]. Hongyu Lai and Bo Fu's research was supported by China National Natural Science Foundation Major Program Research Project [grant number 71991471]. The funding agencies played no role in the design of the study, the collection, analysis, and interpretation of data or in writing the manuscript. Xiaoxi Zhang and Hongyu Lai contributed equally to this work. 
Shanghai Health Development Research Center, 1477 Beijing West Rd., Shanghai, 200040, China Xiaoxi Zhang, Jiangjiang He & Chunlin Jin School of Data Science, Fudan University, 220 Handan Rd, Shanghai, 200433, China Hongyu Lai & Bo Fu School of Management, Fudan University, 220 Handan Rd, Shanghai, 200433, China Lidan Zhang Xiaoxi Zhang Hongyu Lai Jiangjiang He Bo Fu Chunlin Jin XZ and HL are the co-first authors. BF and CJ are the co-senior authors and co-corresponding authors. Conception and study design: XZ, HL, JH, BF, CJ. Data collection: XZ, JH, CJ. Data analysis: XZ, HL, LZ, BF. Supervision: BF, CJ. Writing-original draft: XZ, HL. Writing-review & editing: XZ, HL, LZ, JH, BF, CJ. Correspondence to Bo Fu. The authors declare no conflict of interests. Additional file 1: Figure a. Annual treatment effects for drug cost per outpatient admission controlling for linear time trends. Figure b. Annual treatment effects for drug cost per inpatient visit controlling for linear time trends. Parallel trend test of outcome variables. Zhang, X., Lai, H., Zhang, L. et al. The impacts and unintended consequences of the nationwide pricing reform for drugs and medical services in the urban public hospitals in China. BMC Health Serv Res 20, 1058 (2020). https://doi.org/10.1186/s12913-020-05849-4 Pricing reform for drugs and medical services (PRDMS) Difference-in-difference (DID) Utilization, expenditure, economics and financing systems
Polygon (over a monoid) ''$R$-polygon, operand'' A non-empty set with a [[monoid]] of operators. More precisely, a non-empty set $A$ is called a left polygon over a monoid $R$ if for any $\lambda \in R$ and $a \in A$ the product $\lambda a \in A$ is defined, such that $$ \lambda (\mu a) = (\lambda\mu) a $$ and $$ 1a = a $$ for any $\lambda, \mu \in R$, $a \in A$. A right polygon is defined similarly. Specifying an $R$-polygon $A$ is equivalent to specifying a homomorphism $\phi$ from the monoid $R$ into the monoid of mappings of the set $A$ into itself that transforms 1 to the identity mapping. Here $\lambda a = b$ if and only if $$ \phi(\lambda)a = b \ . $$ In particular, each non-empty set may be considered as a polygon over the monoid of its mappings into itself. Therefore, polygons are closely related to the representation of semi-groups by transformations: cf. [[Transformation semi-group]]. If $A$ is a [[universal algebra]] whose signature $\Omega$ contains only unary operations, then $A$ can be converted into a polygon over the free monoid $F$ generated by $\Omega$ by putting $$ f_1 \cdots f_n a = f_1(\cdots(f_n(a))\cdots) $$ for any $f_i \in \Omega$, $a \in A$. If $\Omega$ is the set of input signals for an automaton having set of states $A$, then $A$ is similarly transformed into an $F$-polygon (cf. [[Automata, algebraic theory of]]). A mapping $\phi$ of an $R$-polygon $A$ into an $R$-polygon $B$ is called a homomorphism if $\phi(\lambda a) = \lambda \phi(a)$ for any $\lambda \in R$ and $a \in A$. For $A=B$ one arrives at the definition of an endomorphism. All endomorphisms of a polygon $A$ form a monoid, and $A$ can be considered as a polygon over it. An [[equivalence relation]] $\theta$ on an $R$-polygon $A$ is called a congruence if $(a,b) \in \theta$ implies $(\lambda a,\lambda b) \in \theta$ for any $\lambda \in R$. The set of congruence classes of $\theta$ is naturally transformed into an $R$-polygon, called a quotient polygon of the polygon $A$ and denoted by $A/\theta$. If $A$ is a polygon over $R$, then in $R$ one can define a relation $\mathop{Ann} A$ by putting $(\lambda,\mu) \in \mathop{Ann} A$ if $\lambda a = \mu a$ for all $a \in A$. The relation $\mathop{Ann} A$ is a congruence on the monoid $R$, and $A$ is transformed in a natural fashion into a polygon over the quotient monoid $R/\mathop{Ann} A$. If the polygon $A$ arose from a certain automaton, then this transition is equivalent to identifying identically acting sequences of input signals. In universal algebra one considers the usual constructions of direct and subdirect product, but in addition in polygon theory one may consider a [[wreath product]] construction important in the algebraic theory of automata. The free product (or co-product) of polygons coincides with their disjoint union. A polygon may be regarded as a non-additive analogue of a module over a ring, which serves as a rich source of problems in the theory of polygons. In particular, a relationship has been established between polygons and radicals in semi-groups (cf. [[Radical in a class of semi-groups]]), and studies have been made on the relation between the properties of a monoid and those of polygons over them.
For example, all left $R$-polygons are projective if and only if $R$ is a one-element group, while the injectivity of all polygons over a commutative monoid $R$ is equivalent to the presence in $R$ of a [[zero element]] and the generation of all its ideals by [[idempotent]]s (see [[Homological classification of rings]]). If $R$ is a monoid with zero 0, one can define an $R$-polygon with zero as a [[pointed set]] $A$ which is an $R$-polygon where the distinguished point $u \in A$ satisfies $0a=u$ for all $a\in A$. The theory of polygons with zero has some special features. Every polygon can be considered as a [[functor]] from a one-object category into the category of sets. ====References==== <table> <TR><TD valign="top">[1]</TD> <TD valign="top"> M.A. Arbib (ed.) , ''Algebraic theory of machines, languages and semigroups'' , Acad. Press (1968)</TD></TR> <TR><TD valign="top">[2]</TD> <TD valign="top"> A.H. Clifford, G.B. Preston, "Algebraic theory of semi-groups" , '''2''' , Amer. Math. Soc. (1967)</TD></TR> <TR><TD valign="top">[3]</TD> <TD valign="top"> L.A. Skornyakov, "Generalizations of modules" , ''Modules'' , '''3''' , Novosibirsk (1973) pp. 22–27 (In Russian)</TD></TR> <TR><TD valign="top">[4]</TD> <TD valign="top"> L.A. Skornyakov, A.V. Mikhalev, "Modules" ''Itogi Nauk. i Tekhn. Alg. Topol. Geom.'' , '''14''' (1976) pp. 57–190 (In Russian)</TD></TR> </table> ====Comments==== In the West, left polygons over a monoid $M$ are usually called $M$-sets; the term "operand" is also in use. The category of all $M$-sets ($M$ fixed) forms a [[topos]]; but for this it is essential not to exclude (as above) the empty $M$-set. Without assuming commutativity as above, the monoids all of whose non-empty left polygons (or, all of whose pointed left polygons) are injective have a few characterizations, which are reviewed in [[#References|[a3]]]. As noted above, there are no non-trivial monoids all of whose left polygons are projective, but the perfect monoids, defined (like perfect rings, cf. [[Perfect ring]]) by every left polygon having a projective covering, are non-trivial; see [[#References|[a1]]], [[#References|[a2]]]. ====References==== <table> <TR><TD valign="top">[a1]</TD> <TD valign="top"> J. Fountain, "Perfect semigroups" ''Proc. Edinburgh Math. Soc.'' , '''20''' (1976) pp. 87–93</TD></TR> <TR><TD valign="top">[a2]</TD> <TD valign="top"> J. Isbell, "Perfect monoids" ''Semigroup Forum'' , '''2''' (1971) pp. 95–118</TD></TR> <TR><TD valign="top">[a3]</TD> <TD valign="top"> R.W. Yoh, "Congruence relations on left canonic semigroups" ''Semigroup Forum'' (1977) pp. 175–183</TD></TR> <TR><TD valign="top">[a4]</TD> <TD valign="top"> S. Eilenberg, "Automata, languages and machines" , Acad. Press (1974)</TD></TR> </table> ====Comments==== The terms ''monoid action'' or ''monoid act'', or ''action'' of a monoid on a set are also common, as is ''act'' for the set acted on; see [[#References|[b1]]]. ====References==== <table> <TR><TD valign="top">[b1]</TD> <TD valign="top"> Mati Kilp, Ulrich Knauer, Alexander V. Mikhalev, ''Monoids, Acts and Categories: With Applications to Wreath Products and Graphs'', Walter de Gruyter (2000) ISBN 3110812908</TD></TR> </table> {{TEX|done}} Template:TEX (view source) Return to Polygon (over a monoid). Polygon (over a monoid). Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Polygon_(over_a_monoid)&oldid=39301 This article was adapted from an original article by L.A. 
Skornyakov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
A powerful Python-based software-defined radio (SDR) — All content Copyright © 2020, P. Lutus — Message Page — Figure 1: PLSDR in operation Introduction | Acquire and Install | Development Process | Notes on Single-Sideband Design | JavaScript 3D I/Q Visualizer | Conclusion | Version History This page describes and provides a particular software-defined radio (hereafter SDR) that I wrote (open-source and free) as part of my exploration of modern radio technology. This SDR is written in Python and relies on the Gnu Radio technical infrastructure for its inner workings. This isn't my first SDR — I've written others. One example was JRX, a program that interfaced with conventional amateur radio receivers and transmitters by means of a library called HamLib. But because this field is moving so quickly, both software and hardware have changed dramatically since JRX and HamLib came on the scene. One can now acquire a radio-frequency front-end device equal in performance to a conventional radio, but at a tiny fraction of the cost, and access it using a specialized program like PLSDR. Role of Mathematics The key to understanding SDR is to see that what once required an unchanging, inflexible electronic circuit built into a radio now relies on software, with enormous advantages in cost and flexibility. Years ago I would change radio modes by throwing a physical switch, but I now get the same result by changing a symbolic term in a mathematical equation written in computer code. Using SDR technology I can reconfigure my "radio" to perform a new task that once would have required me to warm up a soldering iron and rewire an old-fashioned radio, a task that might take a week if I had all the parts on hand — and yes, I've done just that in a long career as both a hardware and a software engineer. This will sound familiar to those who read my technical articles, but SDR is another example showing the intimate relationship between everyday reality and mathematics. The reason software-defined radios are more efficient and less expensive than hardware radios is that radio technology is essentially mathematical. Old-style radios got their results by solving equations — crudely and slowly, but as well as could be expected from the technology of the time. Modern radios have much less hardware and much more software (i.e. equations), a trend that continues. Eventually we will connect an antenna directly to a computer and mathematics will do the rest. Let's compare old and new radios. Here's what a radio looked like in 1934: Figure 2: Atwater-Kent Radio (1934) (image courtesy of the Steven Johannessen Antique Radio Gallery) And here's what a radio looks like in 2018: Figure 3: RTL-SDR Radio (2018) (same scale as Figure 2) The pictured Atwater-Kent radio weighed about 150 pounds (68 kilos) and cost approximately US\$100 in 1934 dollars (that's about \$1800 in present-day US dollars). The RTL-SDR weighs one ounce (1/16 pound or about 1/35 kilo) and costs US\$21. But this comparison misses the fact that (when linked with appropriate software) the RTL-SDR offers 100 times more functionality than the Atwater-Kent. Picturing Radio Waves Old-time radios required only your ears.
Modern radios engage your eyes as well: Figure 4: PLSDR receiving an FM broadcast The above image shows the frequency spectrum of a modern FM broadcast station. Notice the squares to the right and left of the FM station at the middle of the display (near the vertical red line). This leads to two questions: Q. What are they? A. They're called "digital sidebands." In this example they contain a digital version of the broadcast as well as specialized digital signals such as elevator music (i.e. "Muzak"), paging signals for business subscribers and other non-broadcast content. Q. Why didn't we know about this before? A. Unless you were willing to spend upward of US\$10,000 for a hardware version of the above-pictured SDR, old-technology radios couldn't show you a frequency spectrum that would reveal the details of the electromagnetic domain. Now you might spend US\$20.00 to get the same result. Beyond what expensive old-style "spectrum analyzers" could offer, this SDR (and others like it) shows a "waterfall" record of prior electromagnetic emissions (in Figure 4, the blue-green window beneath the main spectrum display). Simply put, a modern radio delivers at least as much information to your eyes as to your ears. I hope I've conveyed to my readers how amazing modern radio technology is, and what an opportunity it presents for technical and mathematical education — and fun along the way. Acquire and Install For those in a blazing hurry who can't be bothered reading instructions, here are links to the old (Python 2.7, GNURadio 3.7) and new (Python 3+, GNURadio 3.8+) PLSDR versions: Old version (1.9, Python 2, GNURadio 3.7) New version (2.0+, Python 3, GNURadio 3.8+) PLSDR supports two primary platforms — Linux and Windows. As usual, the Windows installation procedure is more complicated than the Linux one, but I've done all I can to simplify it. Let's start with Linux. Linux Installation Open a command shell. Issue this command (assuming a Debian-derived Linux distribution): $ sudo apt-get install gnuradio gr-osmosdr python3-pyqt5 Or you may use this syntax from within the PLSDR directory: $ sudo xargs apt-get install < requirements.txt The above installs the prerequisites for PLSDR. Some radio devices require their own special drivers, a topic too complex to resolve in a finite space. For Linux distributions other than those based on Debian, I would include configuration examples, but since I can't actually test them, I decided not to make them up (a general topic to which I shall return later in this article). Installation commands for other distributions should be easy enough to derive from the above. Download this ZIP archive (or, if you prefer, the GitHub release ZIP archive) of the PLSDR project and extract its contents into any convenient directory. In the directory that the ZIP extractor has created, navigate to PLSDR/scripts. Locate a shell script named create_linux_desktop_icon.sh and click it. If your system is set up in a reasonably sane way, this action will create a desktop icon for PLSDR. If not, use your own methods to run the script, which should produce the same outcome. Click the PLSDR desktop icon. If the installation has been successful, you will see the PLSDR program as shown in the images above. An alternative to the desktop icon approach is to click the Python program file PLSDR.py, which on most Linux platforms will launch PLSDR. NOTE: It is much, much easier to install and run PLSDR on Linux than on Windows.
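An optional sanity check (this is a suggestion, not one of the official installation steps above): if GnuRadio and the gr-osmosdr layer installed correctly, the following one-line command should print a GnuRadio version number rather than an ImportError, which means PLSDR will be able to find them:

$ python3 -c "from gnuradio import gr; import osmosdr; print(gr.version())"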
To install PLSDR and its requirements in Windows essentially means reconfiguring Windows to be more like Linux (the latter of which already has Python installed, among other things). Feel free to try to install PLSDR on Windows, but don't be misled by all the heavy lifting — the problem isn't PLSDR, it's Windows. Some of these steps require administrator authority. Recent PLSDR versions (≥ 2.0) use Python 3 and a recent GnuRadio version (3.8+). Install Python 3.0+ from here. This is a necessary preliminary to all later installation steps. Most Windows 10 users with relatively new machines will want this download. Install GnuRadio. Go to GNURadio 3.8.x Win64 Binaries - Download (or a more recent download page as time passes and version numbers change) and select an appropriate download archive (they're fairly large). The above action will download a Windows executable archive named gnuradio_(version).msi, which should appear in your Downloads directory. Click the archive and follow the installation instructions. The above step installs most of the prerequisites for PLSDR (except PyQt5, see below). Some radio devices require their own special drivers, a topic too complex to resolve in a finite space, but as time passes GnuRadio and gr-osmosdr include more device drivers in their default downloads. Download this ZIP archive (or, if you prefer, the GitHub release ZIP archive) of the PLSDR project and extract its contents into any convenient directory. This is most easily accomplished by moving a copy of the downloaded ZIP archive into the root Windows directory (i.e. C:\), then right-clicking the archive and choosing "Extract All ...", then selecting "C:\PLSDR" as the destination if that's not the default. Another choice might be to locate the program directory at \users\(username)\PLSDR, but any location is fine. Now move into the newly created directory and navigate to the scripts subdirectory. At the time of writing GnuRadio doesn't include the PyQt5 library my program needs, so we need to fix this. In the above directory locate a script named install_qt5_library.bat, right-click it and choose "Run as Administrator." Since the time this article was written, the GnuRadio people might have realized their mistake and included PyQt5 by default, in which case the above script will tell you this and exit. Otherwise the script will install the library that PLSDR needs to support its GUI interface. In the scripts subdirectory, locate a script named create_windows_desktop_icon.bat and click it to create a PLSDR desktop icon. Click the desktop link you've just created and, assuming the installation went well, PLSDR will launch. On many modern Windows systems, each of the prior actions will trigger a warning from your antivirus software that an unsafe program is being launched. Ignore and if necessary override these warnings. But wait — this is Windows — there's more that needs to be done. Nothing is ever as easy on Windows as it is on Linux. Windows Driver Configuration The purpose of this step is to replace the drivers Windows dumbly thinks your SDR device needs with drivers that actually work. (Without this step, few radio devices will work on Windows with any SDR software, not just PLSDR.) Navigate into a particular GnuRadio directory, which should be located here: C:\Program Files\GnuRadio-(version)\bin In the above directory, locate a program named "Zadig-(version).exe". Run it. Plug your radio device into the computer if it's not already connected. In the running Zadig program, choose "Options ...
List All Devices." This refreshes the list of devices that Zadig knows about (a drop-down list that is top center in the program). Repeat this step any time you plug or unplug a radio device. The Zadig program will list all connected USB devices, not just your radio device, so you need to identify your device in the list, which may not be so easy. For an example radio device — an RTL-SDR device such as was pictured earlier in this article — the Zadig program lists two devices unhelpfully named "Bulk-In Interface (Interface 0)" and "Bulk-In Interface (Interface 1)". Because these names are so completely uninformative, I recommend that you repeatedly plug and unplug your device, refreshing the Zadig list after each action, and see which items disappear from the list. It's the only way to be sure. Having definitely identified the radio device's ports, select one by clicking it, so it remains on display when the list is closed. The Zadig program will offer to replace the default Windows driver with one very possibly more suitable for SDR work. If your radio device already works with PLSDR, if you can already receive radio signals, then in most cases don't bother replacing the default driver. But if you cannot get your radio device to communicate with PLSDR, replace the driver. Repeat the above steps for each port of each radio device you want to use with PLSDR or another SDR program. Old PLSDR version (Python 2) Click this link to download the old PLSDR version (1.9) that runs on Python 2. This is not recommended, this link is here only for historical and archiving purposes. General Installation/Testing Notes Certain radio devices that work easily on Linux won't work very well, or at all, on Windows. One example is SDRplay/SDRUno which, although a pretty nice radio device, is closed-source and therefore inconsistent with the GnuRadio ecosystem and philosophy (and with open-source in general). I tested the four devices I own at the time of writing (A NooElec device and upconverter, an RTL-SDR device, a HackRF device, and an SDRplay device). On Linux, configuring and operating all of them was easy. On Windows, not so easy, and for readers who want the most complete and educational experience, I recommend that they install Linux, or dual-boot, or create a virtual machine ... preferably the first. On the Windows platform I could only get the SDRplay device to work with its bundled application (SDRUno), not with either GnuRadio or PLSDR — not surprising, given the overprotective stance of its maker. Antennas for computer-connected radio devices have special requirements, primarily due to the amount of electrical noise created by and near computers. The best receiving antenna is one that's wrapped in coaxial cable near the computer, and is a bare wire only at a distance. This issue is more severe at low frequencies, because (a) computers are essentially square-wave generators, and (b) square waves generate intense harmonics near their fundamental frequency: Square wave equation: \begin{equation} x(t) = \frac{4}{\pi} \sum_{k=1}^\infty \frac{\sin(2 \pi(2 k - 1)ft)}{2k-1} \end{equation} Equation (1) tells us that the spectrum of a square wave consists of odd-numbered harmonics for a substantial bandwidth above its fundamental frequency $f$. Here's a comparison of a sine wave and a square wave (the fundamental spectrum line is against the left margin): Figure 5: Comparison of sine and square wave spectra. 
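Equation (1) is easy to check numerically. The short Python sketch below is only an illustration (it isn't part of PLSDR): it sums the series, then measures the harmonic amplitudes of the result with an FFT, confirming that only the odd harmonics are present and that their amplitudes fall off as 1/n.

import numpy as np

# Numerical check of equation (1): sum the series, then measure the
# harmonic content of the result with an FFT.
f = 10.0                      # fundamental frequency, Hz
n_samples = 8192              # one second of samples at 8192 samples/second
t = np.arange(n_samples) / float(n_samples)

x = np.zeros_like(t)
for k in range(1, 200):       # 200 terms of the series
    n = 2 * k - 1             # odd harmonic numbers 1, 3, 5, ...
    x += np.sin(2 * np.pi * n * f * t) / n
x *= 4 / np.pi

# With a one-second record the FFT bin spacing is 1 Hz, so harmonic n
# of a 10 Hz fundamental lands in bin n * 10.
spectrum = np.abs(np.fft.rfft(x)) * 2 / n_samples
for n in range(1, 8):
    print("harmonic %d: amplitude %.3f" % (n, spectrum[int(n * f)]))
# Odd harmonics report amplitudes near (4/pi)/n; even harmonics are near zero.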
Here's what this kind of interference looks like: Figure 6: Spectrum display of locally-generated interfering signals. It's not easy to create an antenna that receives only desired signals and rejects the noise of modern, high-speed electronic circuits, but it can be done. PLSDR Detailed Description Because many of the preliminaries have been covered above in the installation section, I'll assume you have a radio device configured and ready for connection to PLSDR. So: Connect the radio device to your computer and to an antenna. Launch PLSDR, go to the Control ... Device item and select the device with a name similar to your device. Item (2) should cause PLSDR to open communication with your device and, if the transaction/setup is successful, the Start/Stop button at the upper right will be enabled. A transaction/setup failure is indicated by an inactive Start/Stop button and the message "No radio device detected" on the program's status line. If item (2) fails to enable the Start/Stop button, this might be caused by having chosen the wrong item in the Device list, lack of a suitable installed driver, or the possibility that your device is not supported by Gnuradio's driver scheme — see the "Troubleshooting" section below for some remedies. Click the Start/Stop button to begin radio data processing. PLSDR offers a number of ways to change frequencies: Choose a frequency from the provided frequency list (at the tab marked "Frequencies" at the lower left). Spin your mouse wheel against any of the green frequency digits at the upper left. (Right-clicking a frequency digit zeroes all the digits to the right.) After clicking one of the frequency digits, one may use keyboard keys: "+" and "-" change frequency by 100 Hz. Left and right arrow keys change by 1 KHz. Up and down arrows change by 10 KHz. Page Up and Page Down change by 100 KHz. Insert and Delete change by 1 MHz. Double-click a frequency (i.e. location) on the spectrum display to center that frequency in the receiver's passband (the vertical red line). Drag your mouse cursor horizontally on the spectrum display to change frequencies or ... Drag vertically to change the position of the decibel trace. Spin your mouse wheel against the spectrum display to expand ("zoom") the frequency scale or ... While holding down the keyboard Ctrl key, spin your mouse wheel in the same way to expand the vertical decibel scale. Right-click the spectrum display to restore it to its default frequency width and decibel height. The Waterfall display (lower left tab marked "Waterfall") shows a time record of past received signals. Its displayed colors may be adjusted with the mouse wheel. The speed of the waterfall display is determined by the spectrum display's Rate FPS setting described below. The Control tab at the right provides many configuration options: For devices with more than one antenna port, the Antenna list provides a way to change ports. The Mode list includes AM, FM, WFM, USB, LSB, CW/U, and CW/L. AM stands for Amplitude Modulation which, apart from Morse code, is the oldest form of radio communication. WFM, used in FM broadcasting, is the wideband version of FM (Frequency Modulation), while the latter is used in point-to-point communications. USB and LSB stand for upper and lower sideband, variations on a very efficient communications method described more fully below. CW/U and CW/L are variations on USB and LSB with the presence of an audible tone to facilitate Morse code reception. 
The gain controls below the Mode list (from 1 to 4 in number) have different names and functions with different radio devices, and are defined by the radio device during initialization. The same is true for the "RF BW Hz" bandwidth control — its range and properties are defined by the radio device. The Squelch control serves a classic function in radio communications — it mutes the audio when incoming signals fall below a chosen level. Remember this control and its function if the receiver should inexplicably go mute. The Averaging control uses a moving-average algorithm to smooth out the noise in the spectrum display. It's an efficient way to reveal signals that might otherwise be buried in noise. The AGC (Automatic Gain Control) list includes various strategies to automatically control the receiver's gain. Of particular interest is SW/S, a slow-AGC mode that is useful when receiving single-sideband signals. The IF BW (Intermediate-Frequency Bandwidth) options buttons change the receiver bandwidth for different conditions. The FFT Size control changes the array size for the Fast Fourier Transform that powers the spectrum display. Larger arrays produce more detail but require more time to produce a spectrum. If the receiver's audio begins to break up or if the spectrum display seems unacceptably slow, decrease the size of the FFT array. The Rate FPS control selects the rate at which new spectra are generated. In some cases slower rates may prevent the computer from being overloaded with work. The Configure tab contains these controls: Corr. PPM applies a user-entered frequency correction, with units of parts per million, to align the receiver's local oscillator with a frequency standard. As time passes available computer receivers are equipped with more accurate clocks, so this feature has become less important. Sample Rate is the rate at which data are generated and transferred to PLSDR from the radio device. This list is populated by the radio device during its initialization. Some receivers have severe limits on data rates, others are more powerful and flexible. There is also the issue of which kind of USB port (if used) carries the data. Older USB ports have limits on data rates, entirely apart from data rates the radio device might otherwise offer. In the event of erratic or slow receiver behavior, reduce this rate. Audio Rate is the rate at which audio data are transferred from PLSDR to your computer's audio system. This setting is more critical and less flexible than the receiver's sample rate, and only a handful of rates can be expected to work. A modern laptop/desktop will support a rate of 48,000 samples per second, while older equipment may be limited to 44,100 or 22,050 samples per second. Faster rates are better, but too high a rate may produce problems of its own. The Audio Device entry is initially blank (which invokes the system default audio device), but on some Linux distributions an entry of "plughw:0,0" may (by bypassing a slow audio protocol) produce better results than the default (meaning a blank entry). But some systems use unexpected numbers for this entry — to find out what the possible numbers are, run this command in a terminal session: $ aplay -l The aplay command output lists card numbers at the left and device numbers to the right. In an entry of "plughw:x,y", x = card number and y = device number. For one system that uses HDMI sound, after some experimentation I entered "plughw:1,10" to get the desired result. 
If you exit PLSDR using the "Quit" button, your choice will be saved for next time. The CW Base Frequency entry is a matter primarily of interest to radio operators able to interpret on-air Morse code. It specifies the frequency of the receiver CW passband center in Hertz. PLSDR's upconversion feature automatically applies a block-upconversion frequency (with a default of 125 MHz) to receiver tuned frequencies below a specified transition frequency (with a default of 24 MHz). This simplifies the use of block upconverter devices like the "HamItUp" device. When this feature is enabled the block-upconversion frequency is automatically applied to frequencies in the specified range, but the user must still throw the upconversion switch on the HamItUp device. The "Upc. Corr. PPM" entry applies a user-specified frequency correction to the particular case of an operating upconverter. The reason for this separate correction is because the upconverter device has its own clock, separate from that in the receiver device, and this clock may require a separate correction. The Offset Tuning feature addresses a problem common to most computer receivers — they produce what is called a "DC spike," a strong pseudo-signal at zero Hertz that in many cases can interfere with reception of weak signals. The offset frequency option deals with this by applying a small offset frequency to the receiver, in a way that doesn't degrade tuning accuracy or spectrum display, but that allows the receiver to operate at full sensitivity. The entered offset may be positive or negative but cannot exceed 1/2 the audio sample rate. Frequency list PLSDR gets frequency information from user entries, but it also displays a default frequency spreadsheet that the user can edit to his own tastes. The spreadsheet is named "frequency_list.ods" and is automatically installed in the user-data PLSDR directory (not the program directory) — which, by the way, will be located here: Linux: /home/(username)/.PLSDR/frequency_list.ods Windows: /users/(username)/.PLSDR/frequency_list.ods The spreadsheet is LibreOffice compatible and contains columns of names, frequencies and comments. Although this specific spreadsheet has particular column names and related information, PLSDR will accept many variations. There's only one requirement: one of the columns must contain frequencies and a special heading that includes one of the tokens 'GHz', 'MHz', 'KHz', or 'Hz', (case insensitive) with frequency data in that column that corresponds to the provided unit. If the user prefers, PLSDR will accept a plain-text CSV (comma-separated values) file instead of a LibreOffice spreadsheet. The same format applies as above — any number of columns in any order, but one of them must contain frequencies and the special header described above. Finally, neither the spreadsheet nor the CSV file need have any particular name. The only requirement is that a user-provided spreadsheet have the ".ods" suffix, or a provided CSV file have a ".csv" suffix. If both kinds of file are in the user directory simultaneously, PLSDR will prefer the spreadsheet. Language and architecture issues Python: PLSDR is written entirely in Python, a comparatively slow interpreted language. Python development is fast (no compilation required) but the resulting programs can sometimes be slow. 
For this program, the consequences of writing in Python are not very severe because the Python code assembles GnuRadio code blocks written in C and C++, and those blocks then run at a much higher speed than Python would be capable of. There are a few exceptions to the above description. The spectrum display code is entirely Python, fortunately this feature doesn't require much computer power to render. One advantage to Python is that the application's running code is also the source code for the project. This is particularly convenient for open-source projects like PLSDR. Operation on a Raspberry Pi: Earlier Raspberry Pi models couldn't handle the combined PLSDR/GNURadio processing load, but according to user reports, more recent Pi models manage this workload easily. Zero-Hz Spike: Most radio devices produce a "spike," a spurious signal, at zero Hz, and most software-defined radios have a way to deal with this. The usual remedy is to move the receiver's passband away from zero Hz by a small value and realign the frequency display to make this change transparent. PLSDR offers this option for those devices that need it (details above). Upconversion: Some radio devices use an upconversion scheme to extend their frequency coverage — the HamItUp device is an example. In this scheme, below a certain frequency, an accessory upconverter block-shifts frequencies into the base device's normal acceptance window (which normally extends from 24 MHz up to 1 or 2 GHz). This allows radio reception between 0 Hz and 24 MHz. Some SDRs require that the user manually enter a conversion frequency and throw a switch to enable upconversion. PLSDR has a user-selectable feature that automatically includes the conversion frequency for tuned frequencies below a set limit. The user must still throw a hardware switch (can't automate that). Radio Device User Configuration At the time of writing the author only has four computer radios (HackRF One, NooElec, RTL-SDR and SDRplay), all of which are known to work with PLSDR simply because they were available for testing. PLSDR includes a much longer list of radio types, but for lack of hardware, no practical way to verify their operation. The longer list of untested radio names and invocation strings is derived from Internet searches, but since no direct tests have been performed, these entries are of questionable value. For radios in PLSDR's list that have not been tested, users are encouraged to edit one of PLSDR's source files to accommodate new information as it becomes available. Specifically, in the PLSDR.py file, there is a Python dictionary named "device_dict" near the top of the file. This dictionary's entries look like this: 'display name' : 'invocation string', The key to successful changes to existing entries is to enter and test a different (working) invocation string, the item at the right. For a radio not included in the default list, one may add a completely new entry as shown above — a display name and invocation string, both quoted as shown and followed by a comma. As new information becomes available, this dictionary's contents can and should be edited to reflect that new information. And for changes that work, that successfully communicate with radios, the author would appreciate hearing about these changes so others may benefit (use the message board link at the top of this page, or click here). Thanks! GnuRadio Companion Scripts The author used GnuRadio Companion (the GnuRadio design environment) a great deal during this project's development phase. 
The resulting scripts aren't completely synchronized with the block configuration in the project any more, but readers might find them interesting. The ZIP archive contains both transmitter and receiver modules of various kinds. PLSDR was developed using this general strategy: Each receiver mode was prototyped using a powerful GnuRadio graphic development environment named "GnuRadio Companion" (hereafter GRC). In GRC, functional blocks can be moved about and connected in different ways using a clever GUI interface, then tested in a live interactive environment. Once the prototyping phase was complete, the design was converted into PLSDR project code by importing GnuRadio and other essential libraries to support the design strategy commonly used in GnuRadio projects, then writing versions of the GnuRadio functional blocks from the prototyping phase. The design was then tested and fine-tuned in a native Python environment to determine whether the prototype scheme could be further optimized and refined. Special blocks were written to allow extraction of data from the GnuRadio block scheme to support custom-written spectrum and waterfall displays. Notes on Single-Sideband Design For lack of useful online documentation the single-sideband decoder that's now part of PLSDR required a significant effort to design, although it probably shouldn't have. But first, a little history. Single-sideband radio transmission and reception is a clever and efficient strategy to squeeze as much communications capability as possible out of a given amount of radio spectrum and transmitter power. From the early days of radio, the original voice transmission mode was AM, Amplitude Modulation, which has this spectrum (the horizontal dimension has units of frequency, not time): Figure 7: Amplitude Modulated radio signal Because the sole purpose of the signal is to transmit the information in the sidebands, it occurred to workers in the field that this purpose could be served by transmitting only one of the sidebands, eliminating the carrier and the other sideband (the carrier signal, essential for decoding the remaining sideband, is restored in the receiver). After some clever engineering work, the Single-Sideband (SSB) method was devised. A single-sideband signal looks like this: Figure 8: Single-Sideband radio signal In Figure 8 the carrier and one sideband have been eliminated (the carrier's location is marked in Figure 8 but it's not actually present), resulting in a substantial saving in transmitter power and spectrum use. When transmitting single sideband, one wants to save power and limit spectrum use by eliminating the carrier and one sideband, as in Figure 8. When receiving, one wants to completely filter out the unused sideband region of the receiver passband to limit interference from adjacent transmitters. The technical methods for transmitting and receiving single sideband signals are similar but not identical. There's a crude method to create single-sideband signals that involves use of filters — one bandpass filter for each sideband. But this method is generally unsatisfactory, slow in operation, inflexible and complex. In radio design, using a bandpass filter to create a single-sideband signal is usually an acknowledgment of defeat. In the PLSDR design phase, because of my use of Python and its limited speed, I sought the simplest, most effective and least processing-intensive sideband decoding method. 
I understood the theory and the mathematics, but to save time I decided to search online for methods other GnuRadio users might have posted. After several hours of research, I came to an astonishing conclusion — of the dozen or so online examples of GnuRadio single sideband creation and decoding methods, not one of them was even remotely correct. All the authors were confident in their methods, and each was entirely wrong. At the time I write this, there isn't one correct single sideband creation or decoding method posted online. Don't get me wrong — there are plenty of published single sideband methods created using GnuRadio's tools and then published in various online forums, but every single one of them is flat wrong (hover-footnote). In my favorite online example, one person posted a request for assistance with an SSB decoder design he couldn't get working, and another apparently more experienced designer explained that the beginner's design couldn't possibly work — it needed to be more complex. The more experienced hand then showed his own much more complex design which, apart from being five times more complicated than necessary, can't decode single-sideband signals any better than the method posted by the beginner (I know, I assembled and tested it). And in a surreal touch, the experienced designer fed the real component of his decoder's complex output to one stereo audio channel, and the imaginary component to the other, as though those listening to the audio would be able to distinguish between them. At this point the more technically experienced among my readers will want me to show my design, which actually works. How did I create and test it? First, a word of warning — the only meaningful validation for a single sideband receiver is to test it against a properly configured single sideband transmitter, preferably one emitting a voice signal. There are any number of online examples of decoders tested using noise sources — apparently the authors didn't realize that a noise source is very likely to confuse the test and the tester — it's common for a noise-based test signal to mix the upper and lower sidebands, then put both of them on one side of a spectrum display, creating the illusion of success. In my tests I used the GnuRadio design facility to create a single sideband transmitter, which played a voice podcast on one computer equipped with a radio signal emitter (HackRF), then tested my receiver designs against that signal source on a separate computer and radio device. I included the all-important step of deliberately switching the transmitter and receiver to opposite sidebands to verify that no signal was getting through — something not one person who posted SSB designs online apparently thought to try, or knew how. But enough history — let's discuss some theoretical issues. The key idea in modern single-sideband encoding and decoding is to take advantage of complex (in the mathematical sense) signals that have components in quadrature (meaning a two-component vector whose components differ by 90°). The conventional terminology for this kind of signal is I/Q, I (In-phase) being the real part and Q (Quadrature) being the imaginary part of a complex signal. The more advanced single-sideband designs use a phasing method, one that takes advantage of the phase relationship between the I and Q signals to mathematically suppress the unwanted sideband. Here's a classic example: Figure 9: Phasing method for sideband suppression Now think about this. 
If we have two waveforms that always differ by 90°, and if we know that the relationship between the components is reversed on one side of zero Hertz compared to the other (for positive frequencies, Q lags I in time by 90°, for negative frequencies, Q leads 90°), to eliminate one sideband all we need is a special device that adds 90 more degrees of phase shift to all frequencies of interest. That would produce 180° of phase shift on one side of zero Hertz, or to put this another way, the two waveforms would be in precise opposition there. Because this special phase shift is applied to only one of the two I/Q components, if the resulting components were added together the result would be zero amplitude on one sideband, and a doubling of amplitude on the other, at all frequencies. That's an ideal outcome, one that can't be approached using a bandpass filter. So what is this fantastic device that adds exactly 90° of phase shift to all frequencies and makes this all possible? It's called a Hilbert transformer. It's not possible to create an ideal Hilbert transformer (whose perfect behavior can only be described mathematically), but practical embodiments are pretty good. Again, this device has the property that it adds 90° of phase shift to all frequencies, not just one. This means it can be configured to completely eliminate one sideband of a complex, wide-bandwidth signal. But rather than try to use words to explain this, I offer this JavaScript app, which shows the process graphically: Display A: I/Q/Hilbert Display B: I/Hilbert/Output USB Frequency: Hz Figure 10: JavaScript single-sideband filter app The I (In-phase) trace is the real part of the complex incoming radio signal. The Q (Quadrature) trace is the imaginary part of the signal — it always differs by 90° from the I trace, either leading or lagging in time. For positive frequencies, Q lags I in time by 90°, as shown with the default settings above. For negative frequencies, Q leads I in time by 90° — test this by moving the frequency slider to a negative frequency. The Hilbert signal always lags Q in time by 90°, regardless of frequency. This is important to understand. The Output trace is the sum of I and Hilbert, as shown in Display B above. Because the relationship between I and Q depends on frequency, and because Hilbert always lags Q regardless of frequency: For positive frequencies, Q = I + 90° and Hilbert = Q + 90° as shown in Display A with default settings, therefore Hilbert is 180° out of phase with I, and the Output sum of the I and Hilbert signals is zero, as shown in Display B with default settings. For negative frequencies (move the frequency slider left to a negative setting), Q = I - 90° and Hilbert = Q + 90° as shown in Display A with a negative frequency setting, therefore Hilbert is exactly in phase with I, and the Output sum of the I and Hilbert signals is twice the normal signal amplitude, as shown in Display B with a negative frequency. Switching between upper and lower sidebands is accomplished by reversing the sign of the Hilbert signal that's applied to the Output adder (try it above — click the USB checkbox). In the above app, the positions of the I and Hilbert traces are slightly offset vertically so the reader can see both traces when they're in phase. This offset isn't part of the real process. 
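This cancellation is easy to verify numerically. The sketch below is only an illustration of the principle: it uses NumPy and SciPy's hilbert() rather than PLSDR's actual GnuRadio blocks, and depending on the sign convention of the Hilbert implementation the plus and minus signs may be mirrored relative to the app above.

import numpy as np
from scipy.signal import hilbert

rate = 48000                       # samples per second
t = np.arange(rate) / float(rate)  # one second of samples

# A complex (I/Q) baseband signal with one tone in each sideband:
# +3 kHz (upper sideband) and -5 kHz (lower sideband).
z = np.exp(2j * np.pi * 3000 * t) + np.exp(-2j * np.pi * 5000 * t)
i, q = z.real, z.imag

# scipy.signal.hilbert() returns the analytic signal q + j*H(q), so its
# imaginary part is Q shifted by 90 degrees at every frequency.
hq = np.imag(hilbert(q))

usb = i - hq    # keeps the positive-frequency (upper) sideband
lsb = i + hq    # keeps the negative-frequency (lower) sideband

def tone(x, hz):
    # amplitude of the audio tone at hz (the FFT bin spacing is 1 Hz here)
    return (np.abs(np.fft.rfft(x)) * 2 / len(x))[int(hz)]

print("USB output: 3 kHz %.2f, 5 kHz %.2f" % (tone(usb, 3000), tone(usb, 5000)))
print("LSB output: 3 kHz %.2f, 5 kHz %.2f" % (tone(lsb, 3000), tone(lsb, 5000)))
# Each output contains only the tone from its own sideband, doubled in
# amplitude; the opposite sideband is suppressed essentially to zero.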
After reflecting on these issues, and because of a limitation in the GnuRadio Hilbert filter design, I was obliged to create an unnecessarily complex (notes below) but effective single-sideband decoding filter that, in its final form, looks like this in GnuRadio Companion (and in PLSDR):

Figure 11: GnuRadio Companion SSB decoder diagram

Notes on Figure 11: Because the GnuRadio Hilbert block isn't an optimal design, this filter is more complex than necessary — I was forced to use two Hilbert transformers where only one is actually needed. This results from the fact that this real-world embodiment of the Hilbert transformer doesn't just shift phases by 90°, it also creates a time delay, and to avoid spoiling the timing of the design, an identical time delay must be included in the other branch of the decoder. The existing Hilbert transformer design provides this essential time delay as one of its two outputs, but if the design requires two input pathways (one for I, one for Q), then the time-delay value must be acquired from a separate Hilbert transformer. So in Figure 11, one Hilbert transformer actually performs the intended 90° phase shift, while the other provides the exact time delay of the active device to keep the system synchronized. It would be nice if these two signals emanated from a single GnuRadio Hilbert block, one that accepted a complex input — that would eliminate some unnecessary complexity in this design. To me personally, a perfect GnuRadio Hilbert block would in some ways be the opposite of the existing one. The perfect Hilbert would accept a complex input, thus eliminating my "Complex to float" block at the left, and it would produce two float (i.e. non-complex) outputs, thus eliminating my "Complex to Real" and "Complex to Imag" blocks at the right. The purpose of the "Mult" block is to reverse the sign of one of the inputs to the adder at the right, which has the effect of switching from one sideband to the other.

There are obviously any number of ways to create and decode single-sideband signals, since we're surrounded by radios that do this routinely, but as I write this the Internet has zero usefulness as a research resource on this topic. Again, I can't believe the number of people I located who tried to design an SSB encoder or decoder, utterly failed, then posted their results on the Web just as though they had succeeded. I realize this group has two sub-populations — those who knew their designs didn't work, and those who, because of defective testing procedures, sincerely believed their designs worked. But both groups posted their results online without hesitation. As I write this, there isn't a single working GnuRadio SSB encoder or decoder design posted online. Tomorrow there will be one.
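For readers who would rather see the Figure 11 topology as code than as a diagram, here is a hedged sketch of the same structure written as a GNU Radio 3.8 hierarchical block. It is reconstructed from the description above rather than copied from PLSDR, and the tap count and the sign convention for sideband selection are illustrative guesses:

```python
from gnuradio import gr, blocks, filter

class ssb_demod(gr.hier_block2):
    """Phasing-method SSB detector: complex baseband in, float audio out."""
    def __init__(self, ntaps=129, usb=True):
        gr.hier_block2.__init__(
            self, "ssb_demod",
            gr.io_signature(1, 1, gr.sizeof_gr_complex),
            gr.io_signature(1, 1, gr.sizeof_float))

        c2f     = blocks.complex_to_float()        # split the incoming signal into I and Q
        hilb_i  = filter.hilbert_fc(ntaps)         # used only for its matching time delay
        hilb_q  = filter.hilbert_fc(ntaps)         # supplies the extra 90-degree shift
        delayed = blocks.complex_to_real()         # real output of hilbert_fc = delayed input
        shifted = blocks.complex_to_imag()         # imag output of hilbert_fc = Hilbert transform
        sign    = blocks.multiply_const_ff(1.0 if usb else -1.0)  # flip sign to switch sidebands
        summer  = blocks.add_ff()

        self.connect(self, c2f)
        self.connect((c2f, 0), hilb_i, delayed, (summer, 0))
        self.connect((c2f, 1), hilb_q, shifted, sign, (summer, 1))
        self.connect(summer, self)
```

Whether the +1 or the -1 multiplier corresponds to the upper sideband depends on the tuning convention upstream, so in practice the sign is something to verify against a known transmitter, exactly as described above.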
JavaScript 3D I/Q Visualizer

Here's another JavaScript app (originally written for another article) that assists visualization of the complex I/Q signals used in modern radio work.

Figure 12: Interactive JavaScript I/Q exploration applet (controls: AM/FM modulation, perspective, 3D, I and Q traces, animation, carrier and modulation frequencies, animation step and interval, and configuration states A through D)

Instructions and suggestions for Figure 12: Enable real-time animation by selecting the "Animate" checkbox. Rotate the 3D graph by dragging your pointing device on its surface, zoom with your mouse wheel. Try enabling/disabling the provided options and entering different frequencies and rates. A set of example Configuration State buttons is provided to show various aspects of I/Q sampling:

The State A example forces the 3D display into a flat, old-style 2D oscilloscope display without I/Q sampling, just to provide a contrast with the newer methods. After clicking "State A", rotate the graph with your pointing device to bring the I/Q elements back into view.

The State B example shows the phase relationship between I and Q for the case of a positive frequency. In this case, Q lags I by 90° or $\frac{\pi}{2}$.

The State C example shows the phase relationship between I and Q for the case of a negative frequency. Notice that the only change between this example and State B is in the phase of Q, which tells us that an old-style mixer without I/Q sampling would not be able to distinguish between a positive and a negative frequency.

The State D example shows a time animation of a negative frequency vector. To see the contrast between negative and positive frequency, change the sign of the carrier frequency and notice that the vector's direction of rotation reverses. Remember that the meaning of "clockwise" and "counterclockwise" depends on one's point of view. For example, the default animation appears to show a clockwise rotation for a positive frequency, but this is so only because of the default graph point of view, located to the right of the graph. But the clockwise/counterclockwise convention applies only when looking toward a future time, i.e. from a position at the left of the graph, as in the State D example.

To speed up / slow down the animation rate, increase / decrease the value of "Anim. Step." To see this display in true 3D, select the "3D" checkbox and grab a pair of red/cyan "Anaglyphic" 3D glasses. It's my hope that this applet will provide a memorable visualization of the details of I/Q sampling, to help the reader acquire an intuitive sense of how and why it works.

This page has two goals — introduce an advanced SDR program, and teach a little radio theory. I hope it succeeds in both efforts. I already have a list of future changes to PLSDR. One will be a way to record raw I/Q data; that shouldn't be difficult at all. Another will be a way to remotely control PLSDR using a network connection. That might be a bit more difficult, primarily having to do with choosing a rational, understandable protocol for the link. It's my hope that this program — and this page — will help people better understand mathematics and modern technology, as well as meet people's practical needs.

PLSDR is © Copyright 2020, P. Lutus and is released under the GPL. PLSDR is also a GitHub project: https://github.com/lutusp/PLSDR/

2020.04.25 Version 2.0. Now that GNURadio 3.8 is generally available, this version uses Python 3.+ and GNURadio 3.8+. While converting, fixed a few small bugs.
2018.05.22 Version 1.9. Changed default audio device to "" (which invokes the system default) after user problems with the earlier value. It turns out that the original default of "plughw:0,0" is only recognized by certain Linux distributions and causes a failure in others.
2018.04.06 Version 1.8. Substantially revised the program code and Windows launch scripts to deal with certain annoying Windows behaviors. PLSDR can now be installed in any convenient location on a Windows system and it will run correctly.
2018.04.01 Version 1.7. Changed the slow-AGC time constant to a more meaningful value.
2018.03.31 Version 1.6. Changed to a more robust program path detection method based on user feedback.
2018.03.31 Version 1.5. Based on user feedback, corrected an error in the device definition dictionary "device_dict".
2018.03.30 Version 1.4. Caught and corrected a potential divide-by-zero condition.
2018.03.30 Version 1.3. Solved a sensitivity problem with the squelch feature.
2018.03.30 Version 1.2. Changed the installation procedure to reflect GitHub project conventions (added a "requirements.txt" file).
2018.03.29 Version 1.1. Changed to a power-law volume control method (many different devices with widely varying volume levels).
2018.03.29 Version 1.0. Initial Public Release.
Basic reproduction number of SEIRS model on regular lattice
Kazunori Sato, Department of Mathematical and Systems Engineering, Shizuoka University, Hamamatsu 432-8561, Japan
Received: 01 March 2019; Accepted: 16 July 2019; Published: 25 July 2019
In this paper we give a basic reproduction number $\mathscr{R}_0$ of an SEIRS model on a regular lattice using the next-generation matrix approach. Our result is straightforward but differs from the basic reproduction numbers shown so far for various fundamental epidemic models on the regular lattice. In some cases the difference arises from the derivation methods for $\mathscr{R}_0$, although the thresholds of infection rates for epidemic outbreak remain the same. We then compare our $\mathscr{R}_0$ to those of these previous studies from several epidemic points of view.
Keywords: lattice model, SEIRS, next-generation matrix, pair approximation
Citation: Kazunori Sato. Basic reproduction number of SEIRS model on regular lattice. Mathematical Biosciences and Engineering, 2019, 16(6): 6708-6727. doi: 10.3934/mbe.2019335
Figure 1. Plot of $ \tilde{\beta} $ that gives the threshold value $ \mathscr{R}_0 = 1 $, against $ \tilde{\nu} $ and $ \tilde{\omega} $.
Figure 2. Plot of $ \mathscr{R}_0 $ with $ \tilde{\beta}\to \infty $ against $ \tilde{\nu} $ and $ \tilde{\omega} $.
Figure 3. Four invaded S-I pairs created by the introduction of a single I individual; we trace how many S-I pairs are produced by each S-I pair.
Figure 4. Plot of $ \mathscr{R}_0 $ against $ \tilde{\beta} $. $ \mathscr{R}_0 $ is evaluated as the number of newly produced S-I pairs, before disappearing, for the four invaded S-I pairs. Dots, Monte Carlo simulation; dashed line, mean-field approximation; solid line, pair approximation.
Sublacunary sets and interpolation sets for nilsequences
Anh N. Le, Department of Mathematics, Ohio State University, 231 W. 18th Ave., Columbus, OH 43210, USA
Received August 2021; Revised October 2021; Early access November 2021
A set $ E \subset \mathbb{N} $ is an interpolation set for nilsequences if every bounded function on $ E $ can be extended to a nilsequence on $ \mathbb{N} $. Following a theorem of Strzelecki, every lacunary set is an interpolation set for nilsequences. We show that sublacunary sets are not interpolation sets for nilsequences. Here $ \{r_n: n \in \mathbb{N}\} \subset \mathbb{N} $ with $ r_1 < r_2 < \ldots $ is sublacunary if $ \lim_{n \to \infty} (\log r_n)/n = 0 $. Furthermore, we prove that the union of an interpolation set for nilsequences and a finite set is an interpolation set for nilsequences. Lastly, we provide a new class of interpolation sets for Bohr almost periodic sequences, and as a result, obtain a new example of an interpolation set for $ 2 $-step nilsequences which is not an interpolation set for Bohr almost periodic sequences.
Keywords: Nilsequence, interpolation set, I0-set, sublacunary set, almost periodic sequence.
Mathematics Subject Classification: Primary: 37A46; Secondary: 37A44.
Citation: Anh N. Le. Sublacunary sets and interpolation sets for nilsequences. Discrete & Continuous Dynamical Systems, doi: 10.3934/dcds.2021175
Methodology Article
Confident difference criterion: a new Bayesian differentially expressed gene selection algorithm with applications
Fang Yu1, Ming-Hui Chen2, Lynn Kuo2, Heather Talbott3 & John S. Davis4
Recently, the Bayesian method has become more popular for analyzing high dimensional gene expression data, as it allows us to borrow information across different genes and provides powerful estimators for evaluating gene expression levels. It is crucial to develop a simple but efficient gene selection algorithm for detecting differentially expressed (DE) genes based on the Bayesian estimators. In this paper, by extending the two-criterion idea of Chen et al. (Chen M-H, Ibrahim JG, Chi Y-Y. A new class of mixture models for differential gene expression in DNA microarray data. J Stat Plan Inference. 2008;138:387–404), we propose two new gene selection algorithms for general Bayesian models and name these new methods the confident difference criterion methods. One is based on the standardized differences between two mean expression values among genes; the other adds the differences between two variances to it. The proposed confident difference criterion methods first evaluate the posterior probability of a gene having different gene expressions between competitive samples and then declare a gene to be DE if the posterior probability is large. The theoretical connection between the first proposed method, based on the means, and the Bayes factor approach proposed by Yu et al. (Yu F, Chen M-H, Kuo L. Detecting differentially expressed genes using calibrated Bayes factors. Statistica Sinica. 2008;18:783–802) is established under the normal-normal model with equal variances between two samples. The empirical performance of the proposed methods is examined and compared to those of several existing methods via several simulations. The results from these simulation studies show that the proposed confident difference criterion methods outperform the existing methods when comparing gene expressions across different conditions for both microarray studies and sequence-based high-throughput studies. A real dataset is used to further demonstrate the proposed methodology. In the real data application, the confident difference criterion methods successfully identified more clinically important DE genes than the other methods. The confident difference criterion method proposed in this paper provides a new efficient approach for both microarray studies and sequence-based high-throughput studies to identify differentially expressed genes.
Background
In the past decade, high-throughput molecular technologies have gained great popularity in gene expression profiling due to their capability of producing thousands of measurements for each of the assayed samples. The microarray technology and next-generation sequencing are two widely used high-throughput technologies. Next-generation sequencing improves upon Sanger dideoxy sequencing so that the number of sequencing reactions in a single run can be in the millions. For example, in Nature (2008), Bentley et al. [4] and Wang et al. [34] reported the DNA sequence of a Nigerian individual and an Asian individual, respectively. Ley et al. [18] analyzed the genome sequence of a tumor sample. One common scientific question addressed by these high-throughput experiments is to identify the genes with differential expression between two biological conditions.
Although the high-throughput technologies offer us rich biological information, they are highly error-prone because many genes are monitored at the same time with a relatively small sample size. Bayesian methods provide a good solution to this problem because they synthesize all the data by borrowing information across different genes and produce more efficient estimators for evaluating the gene expressions. They include linear models in LIMMA [28] where empirical Bayesian methods were used to obtain stable results even with small sample size. A more detailed description of the Bayesian statistical methods for microarray studies can be found in Dudoit et al. [7], Pan [25], and Kuo et al. [15]. Other Bayesian methods for RNA-Seq studies using next generation sequencing were reviewed by Kvam et al. [16] and Soneson and Delorenzi [29]. Yu et al. [36] pointed out that most statistical methods for microarray studies examined the differential expressions by testing on the equality of means of the log-transformed intensities between the treatment and control, which may not be appropriate for data with complex structures (for example, a mixture normal distributions with multiple modes). They proposed a calibrated Bayes factor (CBF) method to evaluate the ratio of the full data marginal likelihood under the alternative hypothesis that a gene is differentially expressed (DE) relative to that for the null hypothesis that a gene is equivalently expressed (EE) between two biological conditions. Although their approach has the potential for handling data with more complicated distributions, the computational cost of their method may increase greatly with the complexity of the model. Chen et al. [6] employed a class of mixture models with two components to fit the microarray data with two biological conditions. To evaluate the differential expressions for each gene, they proposed a gene selection algorithm, namely the two-criterion method. Specifically, they calculated a posterior probability that there is at least a two-fold change between the mean values of raw intensities under the two considered conditions. Then a gene is declared to be DE if the resulting posterior probability is large (say at least 0.7). Since the posterior probability is readily available once a Markov chain Monte Carlo sample is drawn from the posterior distribution, the gene selection algorithm proposed by them is quite easy to implement and computationally inexpensive. However, their approach does not consider general data distributions as that in the Bayes factor approach given by Yu et al. [36]. Assuming that the data under each biological condition follow a log-normal distribution as in [6], the mean value of raw intensities equals to exp(mean+variance/2) under each condition. Thus, the two-criterion method proposed by Chen et al. [6] that calculates the ratio of two means of the raw intensities depends on not only the difference between the two transformed means but also the difference between their variances. So, when the differences between the means and the differences between the variances are in opposite directions, the Chen et al. method may not be able to detect DE genes. Additionally, their paper neither provides a guidance on controlling the false discovery rate (FDR) nor carries out the performance comparison with other existing methods. 
Our goal in this paper is to develop a simple but efficient gene selection algorithm so that it is not only computationally efficient, but also flexible in handling data with a complicated distribution as in Yu et al. [36]. We redevelop the two-criterion method proposed by Chen et al. [6] and construct two new gene selection algorithms for general Bayesian models. One is based on the differences between means and the other is based on both mean differences and variance differences. To differentiate from the method proposed in Chen et al. [6], we name our methods the confident difference criterion methods and refer to the two proposed confident difference criterion methods in this paper as Methods I and II. We show that Method I, which compares the mean expressions from different conditions, is equivalent to the calibrated Bayes factor approach [36] when the raw intensities from two different biological conditions follow log-normal distributions with equal variance. We also address the multiple comparisons issue with a control of the false discovery rate. We further apply the proposed method to carry out analyses of microarray data with more than two conditions as well as sequence-based RNA data.

Model for microarray data

We assume that the data, denoted by \(D_{obs}\), have already been preprocessed with appropriate transformation and normalization. Let T be the total number of biological conditions in the study. The data may contain two biological conditions (T=2) or multiple biological conditions (T>2). The common analytical objective is to detect differentially expressed (DE) genes across different biological conditions. Let \(x_{gtk}\) denote the preprocessed expression intensity of the g th gene in the k th sample under the t th condition for t=1,⋯,T. There are a total of G genes with sample size \(n_{gt}\) under condition t. Thus, the data on gene g under each condition can be summarized using a vector: \(\mathbf{X}_{gt}=(x_{gt1},\ldots,x_{gtn_{gt}})\). We assume that the intensity \(x_{gtk}\), k=1,⋯,\(n_{gt}\), t=1,2,⋯,T, follows a normal distribution \(\mathcal{N}(\mu_{gt}, \sigma^{2}_{gt})\) independently. The parameters \(\mu_{gt}\) and \(\sigma^{2}_{gt}\) denote the mean and variance of the intensities of gene g under condition t, respectively. We write the mean intensities as \(\mu_{gt} = \mu_{g} + \delta_{gt}/T\), t=1,…,T, where \(\sum^{T}_{t=1}\delta_{gt}=0\). For simplicity, we set \(\delta_{g1}=-\sum^{T}_{t=2}\delta_{gt}\) under the first biological condition. We note that \(\mu_{g}\) defines the overall mean of the intensities across all biological conditions, and \(\delta_{gt}/T\) measures the difference in the mean intensity under biological condition t from the overall mean. In a microarray study with two biological conditions (T=2), the mean intensities \(\mu_{g1}\) and \(\mu_{g2}\) are written as \(\mu_{g1}=\mu_{g}-\delta_{g}/2\) and \(\mu_{g2}=\mu_{g}+\delta_{g}/2\), respectively. When a gene is DE, we expect that the distributions of the data differ under at least two biological conditions.

Hierarchical prior distributions

Noninformative conditionally conjugate priors are specified for all parameters. Specifically, we assume that the mean parameters \(\mu_{g} \sim \mathcal{N}(0,\tau^{2})\) and \(\delta_{gt} \sim \mathcal{N}(0, \omega^{2})\) for t>1 and any g, and the variance parameters \(\sigma^{2}_{gt} \sim \mathcal{IG}(a_{t},b_{t})\).
We set the variance parameters τ 2 and ω 2 in the normal priors to be 100 to obtain relatively noninformative priors. The shape parameter a t in the inverse gamma prior is set to be 2, so that the prior mean of \(\sigma _{\textit {gt}}^{2}\) equals b t . We further let the scale parameter b t follow a conditionally conjugate gamma prior with \(b_{t} \sim \mathcal {G}(c,d)\), where the hyperparameter c is specified as 1 and the hyperparameter \(d \sim \mathcal {IG}(a_{d},b_{d})\), in which a d and b d are both set to be 0.01 in the simulation study and 1 in the real data analysis. Our hierarchical priors for the variance parameters, which are often difficult to estimate, allow for borrowing the information across genes via \(b_{t} \sim \mathcal {G}(c,d)\) as well as biological conditions via \(d \sim \mathcal {IG}(a_{d},b_{d})\). We intend to specify a noninformative inverse-gamma prior for the parameter d. The value of "1" was specified for both a d and b d in the real data analysis since the real data had a smaller sample size than the simulated data in the simulation study. These values of the hyperparameters still led to noninformative priors since the prior mean and variance of d do not exist. However, these values allowed us to borrow a little but not too much information across different biological conditions under comparison. Conditional posterior distributions Let \(\bar {x}_{\textit {gt}}\) denote the average intensities of gene g under condition t and also let the vector \(\bar {\mathbf {X}}_{g}=\{\bar {x}_{g1},\cdots,\bar {x}_{\textit {gT}}\}\) denote the average intensities for gene g. Then, \(\bar {\mathbf {X}}_{g}\) follows a multivariate normal distribution with \(\bar {\mathbf {X}}_{g} \sim \mathcal {N}(\mathbf {A}\mathbf {\Theta }_{g},\mathbf {\Sigma }_{g})\), where Θ g =(μ g ,δ g2,⋯,δ gT )′ is a column vector of size T, and Σ g is a diagonal matrix of size T×T with the t th diagonal element \((\mathbf {\Sigma }_{g})_{t,t}=\sigma ^{2}_{\textit {gt}}/n_{\textit {gt}}\). Here A is a T×T matrix, in which all elements in the first column equals one, i.e., A t1=1 for t=1,⋯,T, all but the first element in the first row equals −1/T, i.e., A 1t =−1/T for t=2,⋯,T, all but the first diagonal element equals 1/T, i.e., A t,t =1/T for t=2,⋯,T, and all other elements equal zero. Since the parameters in Θ g independently follow normal prior distributions, then \(\mathbf {\Theta }_{g} \sim \mathcal {N}(\mathbf {0},\mathbf {\Sigma }_{0})\), where 0 is a vector of size T containing all zero's and Σ 0 is a diagonal matrix with the first diagonal element (Σ 0)1,1=τ 2 and all other diagonal elements equal to ω 2, i.e., (Σ 0) t,t =ω 2 for t=2,⋯,T. Therefore, the conditional posterior distribution of Θ g is a multivariate normal distribution with Θ g ∼N(U g ,B g ), where the inverse of the variance matrix \(\mathbf {B}_{g}^{-1}=\left (\mathbf {A}'\mathbf {\Sigma }_{g}^{-1} \mathbf {A}+\mathbf {\Sigma }_{0}^{-1}\right)\), and the mean vector \(\mathbf {U}_{g}= \mathbf {B}_{g} \mathbf {A}' \mathbf {\Sigma }_{g}^{-1} \bar {\mathbf {X}}_{g}\). The conditional posterior distribution of the variance parameter \(\sigma _{\textit {gt}}^{2}\) is an inverse-gamma distribution with \(\sigma _{\textit {gt}}^{2} \sim \mathcal {IG}\left (a_{t}+\frac {1}{2}n_{\textit {gt}}, b_{t}+\frac {1}{2}\sum _{k=1}^{n_{\textit {gt}}}\left (x_{\textit {gtk}}-\mu _{\textit {gt}}\right)^{2}\right)\). 
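For concreteness, the two closed-form updates above can be written in a few lines of NumPy. The sketch below is not from the paper's software; the function and argument names are our own, and the design matrix A follows the definition given earlier in this section:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_A(T):
    """Design matrix A: first column 1, first row -1/T off the diagonal, diagonal 1/T."""
    A = np.zeros((T, T))
    A[:, 0] = 1.0
    A[0, 1:] = -1.0 / T
    for t in range(1, T):
        A[t, t] = 1.0 / T
    return A

def draw_theta_g(xbar_g, sigma2_g, n_g, A, Sigma0_inv):
    """One draw of Theta_g = (mu_g, delta_g2, ..., delta_gT)' from N(U_g, B_g)."""
    Sigma_g_inv = np.diag(n_g / sigma2_g)          # inverse of the diagonal matrix Sigma_g
    B_inv = A.T @ Sigma_g_inv @ A + Sigma0_inv     # posterior precision B_g^{-1}
    B = np.linalg.inv(B_inv)
    U = B @ A.T @ Sigma_g_inv @ xbar_g             # posterior mean U_g
    return rng.multivariate_normal(U, B)

def draw_sigma2_gt(x_gt, mu_gt, a_t, b_t):
    """One draw of sigma^2_gt from IG(a_t + n_gt/2, b_t + sum((x - mu)^2)/2)."""
    a_post = a_t + 0.5 * len(x_gt)
    b_post = b_t + 0.5 * np.sum((x_gt - mu_gt) ** 2)
    return 1.0 / rng.gamma(a_post, 1.0 / b_post)   # inverse gamma via reciprocal of a gamma draw
```

In a full Gibbs sweep these draws alternate with a Metropolis-Hastings update of \(b_t\), whose conditional density, given next, has no standard form.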
The conditional posterior density of the hyperparameter b t is given by \(g\left (b_{t}|\sigma _{1t}^{2},\cdots, \sigma _{\textit {Gt}}\right) \propto b_{t}^{(\mathrm {G} a_{t}/2+c)}\exp \left (-\frac {b_{t}}{2}\sum {\sigma _{\textit {gt}}^{-2}}\right)\times \left (\sum _{t'\ne t} b_{t}'+b_{t}+b_{d}\right)^{\mathrm {T} c+a_{d}}\). Consequently, we can apply the Gibbs sampling algorithm to sample the parameters b t , \(\sigma ^{2}_{\textit {gt}}\) and Θ g in turn from their respective conditional posterior distributions using the following steps: (1) sample b t for each condition t from its posterior density function \(g(b_{t}|\sigma _{1t}^{2},\cdots, \sigma _{\textit {Gt}})\) via the Metropolis-Hastings algorithm; (2) sample \(\sigma ^{2}_{\textit {gt}}\) given b t and μ gt for each g and t from its inverse gamma posterior distribution with updated parameters \(a_{t}+\frac {1}{2}n_{\textit {gt}}\) and \(b_{t}+\frac {1}{2}\sum _{k=1}^{n_{\textit {gt}}}(x_{\textit {gtk}}-\mu _{\textit {gt}})^{2}\); (3) sample Θ g given \(\sigma ^{2}_{\textit {gt}}\) for all g from their conditional multivariate normal posterior distribution, and calculate the μ gt based on the sampled values of Θ g . Model for sequence-based data Let \(\phantom {\dot {i}\!}\textit {\textbf {Y}}_{\textit {gt}}=(y_{gt1},\cdots y_{gtn_{\textit {gt}}})\) denote all n gt observed counts of the expressed tags of gene g under condition t for g=1,⋯,G and t=1,⋯,T. We assume that y gkt follows a negative binomial distribution, which is commonly used for the count data with overdispersion [2, 26]. Specifically, we assume y gtk follows \(\mathcal {NB}\left (\phi _{t},\frac {m_{\textit {tk}}\lambda _{\textit {gt}}}{\phi _{t}+m_{\textit {tk}}\lambda _{\textit {gt}}}\right)\), with mean m tk λ gt and variance \(m_{\textit {tk}}\lambda _{\textit {gt}}\left (1+m_{\textit {tk}}\lambda _{\textit {gt}}\phi _{t}^{-1}\right)\). We set m tk to be the library size of the k th sample under the t th condition, which is the sum of all counts from this library. The dispersion parameter ϕ t is assumed to be positive, accounting for potential over-dispersion in the data. When the dispersion parameter ϕ t gets extremely large, the value of \(\phi _{t}^{-1}\) approaches to zero, and the negative binomial distribution becomes a Poisson distribution with a mean value of m tk λ gt . DE genes are expected to have different λ gt 's under different biological conditions. Hierarchical prior distributions We assume that each dispersion parameter ϕ t follows a gamma distribution, \(\phi _{t} \sim \mathcal {G}(\alpha _{\phi }, \beta _{\phi })\) independently over t and its scale parameter β ϕ follows an inverse gamma distribution with \(\beta _{\phi } \sim \mathcal {IG}(\zeta _{\phi },\eta _{\phi })\). We also assume that each gene expression parameter λ gt follows an inverse gamma distribution with \(\lambda _{\textit {gt}} \sim \mathcal {IG}(\alpha _{\lambda _{t}},\beta _{\lambda _{t}})\), where the scale parameter \(\beta _{\lambda _{t}} \sim \mathcal {G}(\zeta _{\lambda },\eta _{\lambda })\). In our simulation studies, we set all the hyperparameters \(\{\alpha _{\phi },\zeta _{\phi }, \eta _{\phi },\alpha _{\lambda _{t}},\zeta _{\lambda },\eta _{\lambda }\}\phantom {\dot {i}\!}\) to be one. 
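As a quick sanity check of the negative binomial parameterization above, the snippet below simulates counts with mean \(m_{tk}\lambda_{gt}\) and dispersion \(\phi_t\) and confirms the stated mean-variance relationship; the numerical values chosen for the library size, expression level, and dispersion are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)

m_tk, lam_gt, phi_t = 1.0e6, 2.0e-5, 5.0      # library size, expression level, dispersion (illustrative)
p = phi_t / (phi_t + m_tk * lam_gt)           # success probability in NumPy's NB parameterization

y = rng.negative_binomial(phi_t, p, size=200_000)
print(y.mean())   # ~ m_tk * lam_gt = 20
print(y.var())    # ~ 20 * (1 + 20 / phi_t) = 100
```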
Since a negative binomial distribution can be written as a Poisson-gamma distribution, we can rewrite the distribution of \(y_{gtk}\) as \(y_{gtk} \sim \mathcal{P}oi(\theta_{gtk})\), with \(\theta_{gtk} \sim \mathcal{G}(\phi_{t}, m_{tk}\lambda_{gt}\phi_{t}^{-1})\). Then we can derive all the conditional posterior distributions for all of the parameters. Specifically, the conditional posterior distribution of \(\theta_{gtk}\) is a gamma distribution with \(\theta_{gtk} \sim \mathcal{G}\left(y_{gtk}+\phi_{t},\left[1+\frac{\phi_{t}}{m_{tk}\lambda_{gt}}\right]^{-1}\right)\), the kernel of the conditional posterior density of \(\phi_{t}\) is given by \(\prod_{gk}\left[\frac{\phi_{t}^{\phi_{t}}}{\Gamma(\phi_{t})}\left(\frac{\theta_{gtk}}{m_{tk}\lambda_{gt}}\right)^{\phi_{t}}\exp\left(-\frac{\theta_{gtk}}{m_{tk}\lambda_{gt}}\phi_{t}\right)\right]\exp\left(-\frac{\phi_{t}}{\beta_{\phi}}\right)\phi_{t}^{\alpha_{\phi}-1} I(\phi_{t}>0)\), the conditional posterior distribution of \(\lambda_{gt}\) is \(\mathcal{IG}\left(\sum_{k}\phi_{t}+\alpha_{\lambda_{t}}, \beta_{\lambda_{t}}+\sum_{k}\frac{\theta_{gtk}\phi_{t}}{m_{tk}}\right)\), and the hyperparameters \(\beta_{\phi}\) and \(\beta_{\lambda_{t}}\) respectively have the conditional posterior distributions \(\beta_{\phi} \sim \mathcal{IG}\left(T\alpha_{\phi}+\zeta_{\phi}, \sum_{t}\phi_{t}+\eta_{\phi}\right)\) and \(\beta_{\lambda_{t}} \sim \mathcal{G}\left(G\alpha_{\lambda}+\zeta_{\lambda}, 1/\left(1/\eta_{\lambda}+\sum_{g} 1/\lambda_{gt}\right)\right)\). Let \(\boldsymbol{\theta}_{t}\) denote a set containing all \(\theta_{gtk}\)'s and \(\boldsymbol{\lambda}_{t}\) a set containing all \(\lambda_{gt}\)'s for each condition t. We use the Gibbs sampling algorithm to sample the parameters \(\{\boldsymbol{\theta}_{t}, \boldsymbol{\lambda}_{t}, \beta_{\lambda_{t}}\}\), ∀t, and \(\beta_{\phi}\) from their conditional posterior distributions. The conditional posterior distribution of \(\phi_{t}\) does not have a known distributional form, so these parameters are sampled using the Metropolis-Hastings algorithm from their conditional posterior distributions.

Confident difference criterion

The confident difference criterion method extends the two-criterion method, which was first proposed by Ibrahim et al. [13] to detect DE genes for microarray studies with two biological conditions. In this two-criterion method, the fold change between two conditions was defined as \(\xi_{g}=\exp\left(\mu_{g2}+0.5\sigma_{g2}^{2}-\mu_{g1}-0.5\sigma_{g1}^{2}\right)\), and the posterior probabilities of having at least a two-fold change between the two conditions, denoted as \(\gamma_{g1}=Pr(\xi_{g}>2|D_{obs})\) and \(\gamma_{g2}=Pr(\xi_{g}<1/2|D_{obs})\), were evaluated for each gene to quantify the evidence of its differential expression. A gene is declared to be DE if the calculated posterior probability \(\gamma_{g1}\) or \(\gamma_{g2}\) is sufficiently large. The two-criterion method is easy to compute and provides good false positive and false negative rates [6] for identifying DE genes from microarray studies with two biological conditions.
However, the posterior probability \(\gamma_{g}\) defined in the two-criterion method does not account for the posterior variability of the fold change, and may not work well for data with multiple conditions due to the potential multiple comparisons problem, since only two conditions can be compared at a time. In this section, we develop the confident difference criterion, using a similar idea to the existing two-criterion method, to compare mean expressions (Method I) after taking into account the posterior variability of the mean intensity parameters for microarray data with two biological conditions. Then we extend the newly developed confident difference criterion method to microarray data with multiple biological conditions. Furthermore, we develop another version of the confident difference criterion method to compare both means and variances of the expressions (Method II) for microarray data. Finally, we extend the confident difference criterion method for comparing mean differential expressions of microarray data (Method I) to the analysis of RNA-Seq data (Method I).

Confident difference criterion for the comparison between mean expressions for the microarray data

Microarray study with two conditions

For a study with two biological conditions, \(\mu_{g2}-\mu_{g1}\) quantifies the difference in the mean intensities of gene g between the two conditions and its conditional posterior distribution follows a normal distribution. We define the posterior probability as
$$ \gamma_{g}=Pr\left(\frac{|\mu_{g2}-\mu_{g1}|}{\sigma_{\mu_{g2}-\mu_{g1}}}>2 \Bigg| D_{obs}\right), \qquad (1) $$
where \(\sigma_{\mu_{g2}-\mu_{g1}}\) is the posterior standard deviation of \(\mu_{g2}-\mu_{g1}\). Then we select a cutoff value \(\gamma_{0}\) (0<\(\gamma_{0}\)<1) and declare a gene to be DE if its posterior probability \(\gamma_{g}\) is greater than the cutoff value \(\gamma_{0}\). Note that the choice of \(\gamma_{0}\) reflects how strong the evidence is for declaring DE genes. When a larger value is specified for \(\gamma_{0}\), fewer genes will be selected to be DE. In the two-criterion method, Chen et al. [6] recommended using a large cutoff value (ranging between 0.7 and 0.9) because they did not adjust for the posterior variability of the fold change when comparing the gene intensities between the two conditions. After adjusting for the posterior variability, \(\gamma_{g}\) in (1) is quite different from the corresponding posterior probability under the two-criterion method of Chen et al. [6], as shown in the following proposition.

Proposition 1. Assume that the difference in the mean intensities, \(\mu_{g2}-\mu_{g1}\), follows a normal distribution. The proposed confident difference criterion method ensures that if \(\gamma_{g} \ge \gamma_{0}\), then the maximum value of the posterior probabilities for the difference \(\mu_{g2}-\mu_{g1}\) being larger or smaller than zero, i.e., max{\(Pr(\mu_{g2}-\mu_{g1}>0\,|D_{obs})\), \(Pr(\mu_{g2}-\mu_{g1}<0\,|D_{obs})\)}, is at least \(\Phi(2-\Phi^{-1}(1+\Phi(-2)-\gamma_{0}))\) for \(\gamma_{0}>\Phi(-2)\), where \(\Phi\) and \(\Phi^{-1}\) denote the cumulative distribution function (cdf) and the inverse cdf of the standard N(0,1) distribution, respectively. The detailed proof is presented in Additional file 1.

We note that the maximum value of the posterior probabilities for the difference \(\mu_{g2}-\mu_{g1}\) being larger or smaller than zero measures a Bayesian p-value. Figure 1 shows a graphical presentation of Proposition 1 with \(\gamma_{0}\) chosen to be 0.5 and 0.7, respectively.
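Both the Monte Carlo evaluation of (1) and the bound in Proposition 1 are easy to check numerically. The sketch below is illustrative only; the draw arrays stand for hypothetical posterior samples from any sampler, and the names are ours. The second part reproduces the 97.4 % figure quoted in the next paragraph:

```python
import numpy as np
from scipy.stats import norm

def confident_difference_gamma(mu1_draws, mu2_draws, k=2.0):
    """Monte Carlo estimate of gamma_g in Equation (1) from posterior draws of mu_g1, mu_g2."""
    diff = mu2_draws - mu1_draws
    sd = diff.std(ddof=1)                  # posterior standard deviation of mu_g2 - mu_g1
    return np.mean(np.abs(diff) / sd > k)  # fraction of draws exceeding k posterior sd's

# Lower bound on the Bayesian p-value from Proposition 1, evaluated at gamma_0 = 0.5:
gamma0 = 0.5
bound = norm.cdf(2.0 - norm.ppf(1.0 + norm.cdf(-2.0) - gamma0))
print(round(bound, 3))   # 0.974
```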
For example, we use \(\xi_{\mu_{g2}-\mu_{g1}}\) to denote the posterior mean value of the difference \(\mu_{g2}-\mu_{g1}\). When \(\gamma_{0}=0.5\) and assuming that the posterior mean value \(\xi_{\mu_{g2}-\mu_{g1}}>0\), \(\xi_{\mu_{g2}-\mu_{g1}}\) is at least \(1.94\,\sigma_{\mu_{g2}-\mu_{g1}}\) away from zero. The maximum value of the posterior probabilities for the difference \(\mu_{g2}-\mu_{g1}\) being larger or smaller than zero, max{\(Pr(\mu_{g2}-\mu_{g1}>0\,|D_{obs})\), \(Pr(\mu_{g2}-\mu_{g1}<0\,|D_{obs})\)}, will be at least \(\Phi(2-\Phi^{-1}(1+\Phi(-2)-0.5))=97.4\%\). Therefore, we recommend using a smaller cutoff value than in the previous two-criterion method [6] when using (1) for identifying DE genes. Possible choices of the cutoff value \(\gamma_{0}\) may range from 0.4 to 0.7.

Figure 1: Graphical illustration of the confident difference criterion method. The left and right panels use \(\gamma_{0}=0.5\) and \(\gamma_{0}=0.7\), respectively. Here \(\mu_{g2}-\mu_{g1}\) measures the difference in the mean intensities of gene g between the two conditions; both panels are drawn under the assumption that the posterior mean of \(\mu_{g2}-\mu_{g1}\) is positive, and the shaded area measures the posterior probability of a positive difference \(\mu_{g2}-\mu_{g1}\).

Connection with the CBF method for microarray study with two conditions

For a microarray study with two biological conditions, we assume that the preprocessed expression intensity from each biological condition follows a normal distribution with \(x_{gtk} \sim N(\mu_{gt},\sigma_{gt}^{2})\), and the parameters follow the prior distribution specified in the aforementioned Model for microarray data subsection. For simplicity, we assume that an equal number of intensities is observed from the same gene under different conditions, and that they share the same known variance, i.e., \(n_{g1}=n_{g2}=n_{g}\) and \(\sigma^{2}_{g1}=\sigma^{2}_{g2}=\sigma^{2}_{g}\). The proposed confident difference criterion method is used to detect differentially expressed genes. Alternatively, we can also apply the CBF method for the data analysis. To detect differentially expressed genes, we test the null hypothesis that the mean intensities are equal (\(\mu_{g1}=\mu_{g2}\)) against the alternative hypothesis that the mean intensities are unequal (\(\mu_{g1}\neq\mu_{g2}\)) between the two biological conditions. We use the same prior distributions as those in the confident difference criterion method under the alternative hypothesis, and similar prior distributions for the parameters under the null hypothesis. With simple algebra, we can show that the proposed confident difference criterion method for comparing the mean intensities between two biological conditions agrees with the CBF method under the condition stated in the following proposition.

Proposition 2. The confident difference criterion method comparing the mean intensities between the two biological conditions with a cut-off value of \(\gamma_{0}\) agrees with the CBF method for testing whether the mean intensities are equal between the two biological conditions with a cut-off value of \(C_{0}\) if \(\Phi\left(2+E_{g}^{\ast}\right)-\Phi\left(-2+E_{g}^{\ast}\right)=1-\gamma_{0}\), where \(|E_{g}^{\ast}|=\left[\log\left(\frac{n_{g}\omega^{2}}{2\sigma^{2}_{g}}+1\right)-2\log(C_{0})\right]^{\frac{1}{2}}\), provided that the cutoff value \(C_{0}\) is chosen so that the argument in the square root expression is non-negative. The detailed proof is presented in Additional file 1.

Microarray study with multiple conditions

The confident difference criterion method can be extended to microarray studies with multiple biological conditions.
Our primary interest of the study is to identify genes that have differential expressions at least between two biological conditions. Therefore, we define a quadratic form to quantify the differences in the gene expression intensities across different biological conditions, and conduct an overall test to determine whether the mean intensities are different at least under two biological conditions on each gene. Considering the first biological condition as a reference condition, we define a column vector Δ μ,g =(μ g2−μ g1,⋯,μ gT −μ g1)′ to measure the difference in the mean intensities between each non-reference biological condition and the reference condition. Let \(\phantom {\dot {i}\!}\Sigma _{\boldsymbol {\Delta }_{\mu, g}}\) be the posterior covariance matrix of Δ μ,g . We then propose the quadratic form, \(\boldsymbol {\Delta }_{\mu, g}'\Sigma ^{-1}_{\boldsymbol {\Delta }_{\mu, g}}\boldsymbol {\Delta }_{\mu, g}\), to quantify the differential gene expressions for all non-reference biological conditions compared to the reference condition. Under the null hypothesis that gene g is not DE, the quadratic form follows a chi-square distribution with d f=T−1 when Δ μ,g is assumed to follow a multivariate normal distribution. We note that the multivariate normality holds asymptotically when the sample size is large. We choose an integer, denoted as C, which is closest to the 95 th percentile of the chi-square distribution. For example, for a microarray study with three biological conditions (i.e., T=3), the corresponding C value equals 6. Similar to (1), we compute the posterior probability $$ \gamma_{g}=Pr\left(\boldsymbol{\Delta}_{\mu, g}'\Sigma^{-1}_{\boldsymbol{\Delta}_{\mu, g}} \boldsymbol{\Delta}_{\mu, g}>C|D_{obs}\right), $$ and declare gene g to be DE if γ g ≥γ 0. Confident difference criterion for comparison of both means and variances of expression for microarray data We note that the confident difference criterion method proposed so far only evaluates the differences in mean intensities. Recall that the Bayes factor approach in Yu et al. [36] is more desirable since it compares both means and variances of the intensities for each gene. Assume that the means and variances are equally important. An appropriate quadratic form can be constructed to quantify the overall difference between both the means and the variances under different conditions on each gene. Since the posterior distribution of \(\sigma _{\textit {gt}}^{2}\)'s is typically skewed, a stabilization transformation of the variance \(\sigma _{\textit {gt}}^{2}\) is required. Let q(.) denote a one-to-one transformation function. The differences in both means and transformed variances of the intensities across different conditions can be summarized in a quadratic form given by $$ Q_{g}=\boldsymbol{\Delta}_{\boldsymbol{\mu},\boldsymbol{\sigma},g}' \Sigma^{-1}_{\boldsymbol{\Delta}_{\boldsymbol{\mu},\boldsymbol{\sigma}, g}} \boldsymbol{\Delta}_{\boldsymbol{\mu},\boldsymbol{\sigma}, g}, $$ where \(\boldsymbol {\Delta }_{\boldsymbol {\mu }, \boldsymbol {\sigma },g}=\left (\mu _{g2}-\mu _{g1},\cdots, \mu _{\textit {gT}}-\mu _{g1}, q\left (\sigma ^{2}_{g2}\right)-q \left (\sigma ^{2}_{g1}\right),\cdots, q\left (\sigma ^{2}_{\textit {gT}}\right)-q\left (\sigma ^{2}_{g1}\right)\right)'\) is a column vector of length 2T−2 containing the differences in both means and transformed variances of the intensities. 
The covariance matrix \(\Sigma _{\boldsymbol {\Delta }_{\boldsymbol {\mu },\boldsymbol {\sigma },g}}\phantom {\dot {i}\!}\) is the posterior covariance matrix of Δ μ,σ,g . Since q(·) is a one-to-one transformation function, then we have \( \sigma ^{2}_{\textit {gt}}=\sigma ^{2}_{gt'}\) if and only if \( q\left (\sigma ^{2}_{\textit {gt}}\right)=q\left (\sigma ^{2}_{gt'}\right)\) for t≠t ′. Thus, the same q(·) function has to be used across all the T treatment groups. The primary reason for introducing the transformation function q(·) is to make the distribution of \(q\left (\sigma ^{2}_{\textit {gt}}\right)\) more normal. Similar to (2), we compute the posterior probability γ g =P r(Q g >C|D obs ), where C is chosen to be an integer, which is closest to the 95 th percentile of the chi-square distribution with d f=2T−2. For example, C will be chosen to be 9 when T=2, and 13 when T=3. In this paper, we consider the negative cube root transformation on the variance parameters \(\sigma ^{2}_{\textit {gt}}\)'s. The cube root transformation, also known as Wilson-Hilferty transformation, was derived by Wilson and Hilferty [35] to transform a chi-square variate to be approximately normally distributed. In the proposed gene selection algorithm below, the cutoff value γ 0 will be automatically determined to control the false discovery rate to be less than a targeted level. Confident difference criterion for sequence-based data As discussed in the Model for sequence-based data subsection, the parameter λ gt quantifies the expression level of gene g under condition t. The differences in λ gt 's measure the relative differential expressions of gene g between the conditions. Note that the λ gt 's likely have small values and their posterior distributions may be skewed. Therefore, we apply a log transformation on λ gt 's and use the differences in logλ gt 's to quantify the differential gene expressions among different biological conditions. Similar to the confident difference criterion method for microarray data, we propose the confident difference criterion method for the sequence-based data with two biological conditions as follows. We first compute $$ \gamma_{g}=Pr\left(\frac{|\log(\lambda_{g2})-\log(\lambda_{g1})|} {\sigma_{\log(\lambda_{g2})-\log(\lambda_{g1})}}>2|D_{obs}\right), $$ where \(\sigma _{\log (\lambda _{g2})-\log (\lambda _{g1})}\) is the posterior standard deviation for the difference log(λ g2)− log(λ g1). We then declare gene g to be DE if γ g ≥γ 0, where 0<γ 0<1 is a predetermined credible level. When the sequence-based data are collected from multiple conditions, the first biological condition will be considered as the reference condition and a column vector Δ λ,g =(log(λ g2)− log(λ g1), log(λ g3)− log(λ g1),⋯, log(λ gT )− log(λ g1))′ contains the differences in the log scaled expression values between the non-reference conditions and the reference condition. Let \({\Sigma }_{\boldsymbol {\Delta }_{\lambda,g}}\phantom {\dot {i}\!}\) denote the posterior covariance matrix of Δ λ,g . Accordingly, the confident difference criterion method is defined as \(\gamma _{g}=Pr\left ({\boldsymbol {\Delta }}_{\lambda,g}'\mathbf {\Sigma }^{-1}_{{\boldsymbol {\Delta }}_{\lambda,g}} {\boldsymbol {\Delta }}_{\lambda,g}>C_{\lambda }|D_{\textit {obs}}\right)\), where C λ is an integer, which is closest to the 95 th percentile of the chi-square distribution with d f=T−1. Again, we declare gene g to be DE if γ g ≥γ 0, where 0<γ 0<1. 
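The multiple-condition criteria above all share the same computational pattern: form the vector of differences from the reference condition, standardize it by its posterior covariance, and count how often the quadratic form exceeds the chi-square-based constant C. A generic Monte Carlo sketch of that pattern is given below; the array name delta_draws is hypothetical and stands for an (S, d) matrix of posterior samples of whichever difference vector is being used, with d equal to the vector's length:

```python
import numpy as np
from scipy.stats import chi2

def quadratic_form_gamma(delta_draws):
    """Monte Carlo estimate of Pr(Delta' Sigma^{-1} Delta > C | D_obs)."""
    d = delta_draws.shape[1]
    C = int(round(chi2.ppf(0.95, df=d)))                 # integer closest to the 95th percentile
    Sigma_inv = np.linalg.inv(np.cov(delta_draws, rowvar=False))
    q = np.einsum('si,ij,sj->s', delta_draws, Sigma_inv, delta_draws)
    return np.mean(q > C)
```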
From the Model for sequence-based data subsection, we see that the variance of the observed count y gtk is \(m_{\textit {tk}}\lambda _{\textit {gt}} \left (1+ m_{\textit {tk}} \lambda _{\textit {gt}} \phi ^{-1}_{t}\right)\), which is a function of λ gt and ϕ t , for the sequence data. Since ϕ t does not depend on g, it is sufficient to compare the mean expressions under different conditions for determining DE genes for the sequence data. False discovery rate and gene selection algorithm The proposed confident difference criterion methods calculate the value of γ g on each gene, whose magnitude reflects the evidence of differential expression. When γ g is large enough, the gene will be declared to be DE. It is of great importance to determine how to choose the cutoff value γ 0. We adopt the approach proposed by Tadesse et al. [31] to select the cutoff value γ 0 for controlling the Bayesian FDR. Let V denote the number of incorrect decisions by identifying EE genes as DE genes and let R be the number of identified DE genes. Then the positive false discovery rate defined by Storey [30] is given by \(pFDR=E\left (\frac {V}{R}|R>0\right)\). We need to test the hypotheses of H 0g : gene g is EE versus H 1g : gene g is DE on each gene. We assume that all genes have the same probability of being EE, and DE, respectively, i.e., P r(H 0g )'s are equal for all genes, and P r(H 1g )'s are equal for all genes. Therefore, the γ g 's are independently and identically distributed. Following Tadesse et al. [31], the Bayesian FDR b F D R(γ 0) when using a cutoff value of γ 0 for the confident difference criterion method is defined as $$ bFDR(\gamma_{0})=\frac{1}{Pr(R>0)}\times \frac{Pr\left(\gamma_{g} \ge \gamma_{0}|H_{0g}\right)Pr(H_{0g})}{Pr(\gamma_{g} \ge \gamma_{0})}, $$ and P r(γ g ≥γ 0)=P r(γ g ≥γ 0|H 0g )P r(H 0g )+P r(γ g ≥γ 0|H 1g )P r(H 1g ). To estimate the FDR, we need to compute P r(γ g ≥γ 0|H 0g ), P r(γ g ≥γ 0|H 1g ) and P r(H 1g ). Note that gene g can be classified into DE or EE depending on whether γ g ≥γ 0. We reuse the data information and specify the prior probability P r(H 1g ) as the proportion of genes classified as DE. Denote the total number of identified DE genes as n D . Then the probability of a gene being DE will be P r(H 1g )=n D /G. Additionally, we estimate the true parameters in the gene expression data distributions from DE or EE genes as the posterior means of the corresponding parameters from the identified DE and EE genes, respectively. An algorithm using the posterior samples from DE or EE genes to estimate the aforementioned probabilities P r(γ g ≥γ 0|H 0g ), P r(γ g ≥γ 0|H 1g ) and b F D R(γ 0) is given as follows. Split the genes into two subsets containing n E EE genes (EEGENE) with calculated γ g <γ 0 and n D DE genes (DEGENE) with γ g ≥γ 0. Note that a DE gene can be either up or down regulated under some condition compared to the reference condition in terms of means or variances of the expression values for microarray experiment or in terms of mean gene expressions for sequence-based experiment. Accordingly, the DE genes will be further split into a series of gene subsets based on the pattern of parameters in comparisons under different biological conditions. For example, in a microarray study with three biological conditions and the mean gene expressions are in comparison. Consider the first condition as the reference condition. 
The DE genes can be classified into four subsets: (i) genes with lower mean gene expressions under both conditions 2 and 3; (ii) genes with lower mean gene expressions under condition 2 but higher mean gene expressions under condition 3; (iii) genes with higher mean gene expressions under condition 2 but lower mean gene expressions under condition 3; and (iv) genes with higher mean gene expressions under both conditions 2 and 3. We denote these subsets of DE genes as D ℓ ,ℓ=1,⋯,L, where the number of subsets L=2T−1 for the microarray study when only mean parameters are in comparison or the sequenced-based study; and L=4T−1 for the microarray study when both mean and variance parameters are in comparison. We also denote the number of genes in the D ℓ DE gene subsets as n D ℓ . For EE genes identified in previous steps, the same data distributions will be considered for the gene expression data from different biological conditions. Hierarchical priors similar to those proposed previously in the Model for microarray data section and the Model for sequence-based data subsection will be augmented separately for microarry data or sequence-based data. The posterior mean of each parameter defined in the distribution of the gene expression data will be calculated. The true parameters in the gene expression data distribution will be estimated using the average value of posterior mean of corresponding parameters from all EE genes. For all identified DE genes, the Markov chain Monte Carlo (MCMC) sampling values from previous steps when implementing the proposed confident difference criterion method will be used for calculating the posterior means of the parameters defined in the gene expression data distribution. For each differential gene expression pattern ℓ, the actual value of each parameter in the gene expression data distribution will be estimated using the average value of its posterior means across all genes in the subset D ℓ with this DE pattern. Using the estimated values for the parameters in the gene expression data, the data will be simulated for κ×G genes (say κ=0.1), among which κ n E EE genes and \(\kappa n_{D_{\ell }}, \ell =1,\dots,L\) DE genes with a pattern of differential gene expression observed in the DE gene subset D ℓ , respectively. The posterior probability γ g will be calculated for each gene based on the simulated data. Depending on whether γ g ≥γ 0, the gene in the simulated data will also be claimed to be DE or EE. Denote the total number of identified DE and EE genes from the simulated data as m D and m E . Then the probability for a EE genes claimed to be DE, P r(γ g ≥γ 0|H 0g ), will be estimated as \(Pr(\gamma _{g} \ge \gamma _{0}|H_{0g})=\frac {m_{E}}{\kappa n_{E}}\); and the probability for a DE genes claimed to be DE, P r(γ g ≥γ 0|H 1g ), will be estimated as \(Pr(\gamma _{g} \ge \gamma _{0}|H_{1g})=\frac {m_{D}}{\kappa n_{D}}\). Using (5), the estimated Bayesian FDR equals \(\widehat {bFDR}=\frac {m_{E}}{m_{D}+m_{E}}\). Note that steps (4) to (6) provide a predictive approach to estimate bFDR when a certain value γ 0 was used for identifying DE genes. Therefore, we can control the FDR at some pre-specified value (i.e. 0.05) by choosing the corresponding cutoff value \(\hat {\gamma _{0}}\) as the minimum value of all cutoff values with an associated FDR no more than 0.05, or \(\hat {\gamma }_{0}=\min \{\gamma _{0}: (\widehat {bFDR}(\gamma) \le 0.05)\}\). 
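A compact sketch of steps (4) to (6) is given below, under the reading that m E and m D are the numbers of simulated truly-EE and truly-DE genes claimed DE at a given cutoff; the inputs (the γ g values recomputed on the simulated data) and all names are assumptions for illustration, not the authors' code.

# Hedged sketch of the cutoff selection: gamma_EE_sim and gamma_DE_sim are the
# posterior probabilities recomputed on data simulated from the estimated EE and
# DE models (kappa*n_E and the kappa*n_{D_l}'s genes, respectively).
import numpy as np

def choose_cutoff(gamma_EE_sim, gamma_DE_sim, target=0.05, grid=None):
    if grid is None:
        grid = np.linspace(0.01, 0.99, 99)
    feasible = []
    for g0 in grid:
        m_E = np.sum(gamma_EE_sim >= g0)    # simulated EE genes wrongly claimed DE
        m_D = np.sum(gamma_DE_sim >= g0)    # simulated DE genes correctly claimed DE
        if m_D + m_E == 0:
            continue
        if m_E / (m_D + m_E) <= target:     # estimated bFDR at this cutoff
            feasible.append(g0)
    return min(feasible) if feasible else None   # hat{gamma}_0 = min{gamma_0 : bFDR_hat <= target}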
In this section, two different simulation studies were conducted to investigate the performance of the proposed confident difference criterion methods on identifying DE genes for microarray or sequence-based studies, respectively. In addition, a real affymetrix dataset is used to further demonstrate the proposed methodology. Simulation study I: Microarray data Two settings were considered. In the first setting, the intensity values having different means and variances between two biological conditions on each DE gene are simulated. In the second setting, the data were simulated from three biological conditions, with DE genes having different mean and variance values between at least two biological conditions. Setting 1 (Two conditions) Fifty simulations were used in this study to investigate the performance of different versions of the confident difference criterion methods described in the Confident difference criterion section. In each simulation, there were 5000 genes in total and 500 DE genes with 10 replications under each of the two biological groups. The log-scaled data were generated via \(x_{g1k} \overset {\text {iid}}{\sim } \mathcal {N}(\mu _{g}-0.5 \delta _{g}, 0.2^{2}), x_{g2k} \overset {\text {iid}}{\sim } \mathcal {N}\left (\mu _{g}+0.5\delta _{g}, 0.9^{2}\right)\) with δ g =1,∀g=1,…,250 and δ g =−1,∀g=251,…,500 for the DE genes, and \(x_{g1k},x_{g2k} \overset {\text {iid}}{\sim } \mathcal {N}(\mu _{g}, 0.7^{2})\) for the remaining EE genes. The average intensities μ g were generated from an uniform distribution, where \(\mu _{g} \overset {\text {iid}}{\sim } \mathcal {U} (5,11)\) for all genes. Conditionally conjugate priors described in the Model for microarray data subsection were used for all parameters μ g , δ g , \(\sigma _{g1}^{2}\), \(\sigma _{g2}^{2}\) and \({\sigma _{g}^{2}}\). The simulated data were analyzed using both Methods I and II of the confident difference criteroin methods. For each version, the cutoff value γ 0 were set to be 0.4, 0.6, or the cutoff value controlling the FDR to be no more than 0.05, separately. The genes with the calculated posterior distribution values γ g via Equation (1) or Equation (3) less than the chosen γ 0 were identified to be DE. To evaluate the performance of the confident difference criterion methods, the simulated datasets were also analyzed by four existing methods: Significant Analysis of Microarrays (SAM) [33], Linear Models for Microarray Data (LIMMA) [28], Semiparametric Hierarchical Method (SPH) [23], Empirical Bayesian Analysis of Gene Expression Data (EBarrays or EBA) [14]. All these existing methods allowed a control of the FDR for multiple comparisons. The genes were declared to be DE with FDR controlled at 0.05 for all these four methods. Based on the identified gene list by each method, we calculated the number of claimed DE genes (CDE), the number of correctly claimed DE genes (CCDE), the number of correctly claimed EE genes (CCEE), the false positive rate (FPR), false negative rate (FNR), false discovery rate (FDR) and false non-discovery rate (FNDR) for all considered methods. These results and their standard deviations reported in parentheses were summarized in Table 1. Note, for Methods I and II, the choice of γ 0=0.4 identified the a larger number of DE genes when compared to the choice of γ 0=0.6. While for the case with FDR control of 0.05, the Method II identified the largest number of DE genes among all six methods. 
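The Setting-1 generative model described above translates directly into a few lines of code; the sketch below is illustrative only, and the seed and array layout are arbitrary choices.

# Setting 1: 5000 genes, 500 DE, 10 replicates per condition, two conditions.
import numpy as np

rng = np.random.default_rng(1)
G, nDE, K = 5000, 500, 10
mu = rng.uniform(5, 11, size=G)              # mu_g ~ U(5, 11)
delta = np.zeros(G)
delta[:250], delta[250:500] = 1.0, -1.0      # delta_g = +1 or -1 for the DE genes

de = np.arange(G) < nDE
x1 = np.empty((G, K))
x2 = np.empty((G, K))
# DE genes: shifted means (mu -/+ 0.5*delta) and unequal SDs (0.2 vs 0.9)
x1[de] = rng.normal(mu[de, None] - 0.5 * delta[de, None], 0.2, size=(nDE, K))
x2[de] = rng.normal(mu[de, None] + 0.5 * delta[de, None], 0.9, size=(nDE, K))
# EE genes: common mean mu_g and common SD 0.7 in both conditions
x1[~de] = rng.normal(mu[~de, None], 0.7, size=(G - nDE, K))
x2[~de] = rng.normal(mu[~de, None], 0.7, size=(G - nDE, K))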
We also compared the results of the confident difference criterion method with a control of FDR against all four existing methods. We expected a method with good performance will provide a good control of FDR and provide smaller error rates in terms of FPR, FNR and FNDR. Under both versions of the confident difference criterion method, the achieved FDR is close to but less than 0.05, implying that the proposed confident difference criterion methods provided a good control of the FDR. All four existing methods also obtained a control of FDR at 0.05 successfully, although the SPH method provides a conservative control of FDR with the empirical FDR equal to 0.02. Since all methods provided small error rates of FPR and FNDR, we put more weight to the comparison of the empirical FNRs among all applied methods. The results in Table 1 showed that Method II provided the smallest empirical FNR out of all methods by successfully identifying almost all truly DE genes; and Method I had comparable empirical FNR as the SAM and the LIMMA methods, and much smaller empirical FNR when compared to the SPH and the EBA methods. Table 1 Performance evaluation under Study I (Setting 1), (G=5000, 500 DE gene) # Setting 2 (Three conditions) The data were simulated from three biological conditions, and the first biological condition was considered as the reference group. A gene was set to be DE so that at least one group would be either up or down regulated from the reference group. Specifically, 500 DE genes out of 5000 genes were set in the data, and the log intensities of the DE genes were generated via \(x_{g1k} \overset {\text {iid}}{\sim } \mathcal {N}\left (\mu _{g1}, 0.2^{2}\right), x_{g2k} \overset {\text {iid}}{\sim } \mathcal {N}\left (\mu _{g1}+0.5\nu _{g1}, 0.5^{2}\right), x_{g3k} \overset {\text {iid}}{\sim } \mathcal {N}\left (\mu _{g1}+0.5 \nu _{g2}, 0.8^{2}\right)\). Depending on whether the gene was set to be DE in one or both conditions from reference group, the parameters ν g1 and ν g2 were set to have ν g1=ν g2=1.5 for g=1,⋯,62 (up-regulated in both conditions); ν g1=1.5, ν g2=0 for g=63,⋯,125 (only up-regulated in condition 2); ν g1=1.5, ν g2=−1.5 for g=126,⋯,187 (up-regulated in condition 2, down-regulated in condition 3); ν g1=0 and ν g2=1.5 for g=188,⋯,250 (only up-regulated in condition 3); ν g1=0, ν g2=−1.5 for g=251,⋯,312 (only down-regulated in condition 3); ν g1=−1.5, ν g2=1.5 for g=313,⋯, 375 (down-regulated in condition 2, up-regulated in condition 3); ν g1=−1.5, ν g2=0 for g=376, ⋯,437 (only down-regulated in condition 2); ν g1=ν g2=−1.5, for g=438,⋯,500 (down-regulated in both conditions). The remaining genes were EE and their log intensities were generated via \(x_{\textit {gtk}} \overset {\text {iid}}{\sim } \mathcal {N}\left (\mu _{g}, 0.6^{2}\right)\) for t=1,2,3 and g=501,⋯,5000. On all genes, the parameter μ g1 were generated from an uniform distribution, i.e., \(\mu _{g1} \overset {\text {iid}}{\sim } \mathcal {U}(5,11)\). Each condition contained 10 replicates on each gene and 50 simulations were conducted. The model similar to those described in Model for microarray data subsection and the proposed confident difference criterion methods including the Method I for comparing mean expressions and the Method II comparing both mean and variance expressions were applied to the simulated data. We considered three choices for the cutoff value γ 0, including prespecified values 0.4, 0.6, or a value with FDR controlled at 0.05, separately. 
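For reference, the Setting-2 data-generating design just described can be sketched as follows; the block boundaries for the eight DE patterns follow the text, while the seed and array layout are arbitrary and this is not the authors' code.

# Setting 2: condition 1 is the reference; DE genes shift conditions 2 and 3 by 0.5*nu.
import numpy as np

rng = np.random.default_rng(2)
G, K = 5000, 10
mu1 = rng.uniform(5, 11, size=G)

patterns = [(1.5, 1.5), (1.5, 0.0), (1.5, -1.5), (0.0, 1.5),
            (0.0, -1.5), (-1.5, 1.5), (-1.5, 0.0), (-1.5, -1.5)]
bounds = [0, 62, 125, 187, 250, 312, 375, 437, 500]      # blocks of DE genes (0-based)
nu = np.zeros((G, 2))
for (lo, hi), pat in zip(zip(bounds[:-1], bounds[1:]), patterns):
    nu[lo:hi] = pat

de = np.arange(G) < 500
x = np.empty((G, 3, K))
x[de, 0] = rng.normal(mu1[de, None], 0.2, size=(500, K))
x[de, 1] = rng.normal(mu1[de, None] + 0.5 * nu[de, 0:1], 0.5, size=(500, K))
x[de, 2] = rng.normal(mu1[de, None] + 0.5 * nu[de, 1:2], 0.8, size=(500, K))
x[~de] = rng.normal(mu1[~de, None, None], 0.6, size=(G - 500, 3, K))   # EE genes, t = 1, 2, 3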
The data were also analyzed by the SAM [33], LIMMA [28], and EBArrays [14] with the FDR controlled at 0.05. The SPH [23] were not used in the study as they were proposed for studies with two biological conditions only. The analytical results from all methods were compared based on four error rates including FPR, FNR, FDR and FNDR from each considered method (Table 2). The confident difference criterion methods including Method I and Method II as well as the existing methods except LIMMA all provided an empirical FDR no more than 0.05 successfully. Comparing to the existing methods, the proposed confident difference criterion methods provided comparable FPR and smaller FNR and FNDR. Method II of the confident difference criterion method compares both mean and variance values of the gene expression intensities across different biological conditions. This is a potential reason for the proposed method providing smaller FNR for microarray data analysis. The confident difference criterion method is particularly effective when both mean and variance of the expression intensities differ across biological conditions on the DE genes. Simulation Study II: sequence-based data The focus of this study is to investigate the performance of the proposed confident difference criterion method for identifying DE genes from sequence-based high-throughput experiments including SAGE and RNA-Seq studies. Setting 1 (SAGE experiment) The simulation proposed by Lu et al. [20] was used to conduct the simulation study. Specifically, 5000 genes were sampled under five libraries from each of the two conditions with fixed library sizes sampled uniformly between 30000 and 90000. A total of 500 genes were set to be DE genes. The data were generated from a negative binomial distribution, \(y_{\textit {gtk}} \overset {\text {iid}}{\sim } \mathcal {NB}(\phi _{t}, \frac {m_{\textit {tk}}\lambda _{\textit {gt}}}{\phi _{t}+m_{\textit {tk}}\lambda _{\textit {gt}}})\) for gene g, for a fixed library k of condition t, where m tk was the library size for library k under condition t; ϕ 1 and ϕ 2 denoted the dispersion parameters for data from the two conditions separately, and both set to be 0.4; λ gt measured the expression level of gene g under condition t and were set with different values when gene g is DE and the same value when gene g is EE. For g=1,⋯,250, we set λ g1=8E−4 and λ g2=2E−4 to include down-regulated genes in condition 2. For g=251,⋯,500, we set λ g1=2E−4 and λ g2=8E−4 to include up-regulated genes in condition 2. For other genes with g=501,⋯,5000, we set λ g1=λ g2=2E−4 to include EE genes. Fifty simulations were used in this study. The proposed confident difference criterion method for RNA-Seq data was used to analyze the simulated data. The posterior probability γ g measuring the evidence of differential gene expression were estimated using average value of its posterior sampled values. The cutoff value γ 0 for γ g were set to be 0.4, 0.6 or a value to control the FDR to be 0.05, separately. The genes with estimated γ g less than the chosen γ 0 value were claimed to be DE. We also fit several other existing methods including edgeR [26], DESeq [2], BaySeq [10], NBPSeq [8], EBSeq [17], NOISeq [32], SAMSeq [19], and TSPM [3]. When the edgeR method was applied, we chose both options to estimate the common dispersion parameter for all tags and the tag-wise dispersion parameters respectively. For the NOISeq method, we estimated and controlled the FDR using the method proposed by Newton et al. 
[23] for identifying DE genes. The results using the proposed confident difference criterion methods and all fitted existing methods for RNA-Seq data were summarized in Table 3. Similar to the simulation study I for microarray data, Table 3 showed that the higher the cutoff value γ 0, the less number of genes were identified to be DE. The confident difference criterion method with a control of FDR at 0.05 achieved an empirical FDR of 0.044, and successfully identified 328.8 genes (65.8 %) on average out of 500 truly DE genes. Compared to other considered methods, the confident difference criterion method performed the best by providing the smallest FNR and FNDR while maintaining comparable FPR and a well controlled FDR. Out of the applied existing methods, the NOISeq method and edgeR method achieved the lowest FNR, and a FDR of no more than 0.05. The BaySeq method provided a conservative control of FDR, and achieved an empirical FDR of lower than 0.001 when controlling the FDR at 0.05. The DESeq, EBSeq and TSPM methods failed to control the FDR at 0.05. The SAMSeq method and TSPM method failed to identify most of the truly DE genes as DE genes, which was not surprising as the performance of both the SAMSeq and TSPM methods is highly sample size dependent as pointed out by Soneson and Delorenzi (2013) [29]. Table 3 Performance evaluation under Study II (Setting 1), (G=5000, 500 DE gene) # Setting 2 (RNA-Seq experiment) We used a similar simulation setting proposed by Kvam et al. [16] for illustrating the application of the proposed confident difference criterion method for RNA-Seq experiment. We still simulated 50 dataset, each dataset contained six libraries with three libraries from each of the two conditions on 5000 genes, among which 250 genes were set to be up-regulated genes and another 250 genes were set to be down-regulated genes in condition 2 versus condition 1. The overall mean expression levels across both conditions were generated from a gamma distribution with \(\lambda _{g} \sim \mathcal {G}(0.25, 600)\). To avoid including genes with low expression levels from both conditions as DE genes, we set the difference in the gene expression levels between conditions in two ways depending on whether the value of λ g is larger than one. Specifically, we generated ξ g from uniform distribution \(\mathcal {U}(3,20)\) for each gene. If the value of λ g >1, we let the fold change between the gene expression values of DE genes to be ξ g , or \(\lambda _{g1}=\lambda _{g}/\sqrt {\xi _{g}}\) and \(\lambda _{g2}=\lambda _{g}*\sqrt {\xi _{g}}\) for up-regulated genes, and \(\lambda _{g1}=\lambda _{g}*\sqrt {\xi _{g}}\) and \(\lambda _{g2}=\lambda _{g}/\sqrt {\xi _{g}}\) for down-regulated genes. If the value of λ g ≤1, we let the absolute difference in the gene expression values to be ξ g , or we let λ g1=λ g +ξ g and λ g2=λ g for down-regulated genes, and λ g1=λ g and λ g2=λ g +ξ g for up-regulated genes in condition 2. For an EE gene, we had λ g1=λ g2=λ g . Then we generated the data using negative binomial distribution of \(y_{\textit {gtk}} \overset {\text {iid}}{\sim } \mathcal {NB}(\phi _{t}, \frac {\lambda _{\textit {gt}}}{\phi _{t}+\lambda _{\textit {gt}}})\) for gene g, and the overdispersion parameters ϕ 1 and ϕ 2 were set to have ϕ 1 = 1 and ϕ 2 = 8 respectively for DE genes; and ϕ 1=ϕ 2=4 for EE genes. All methods applied in setting I of simulation study II were also used for data analysis in this simulation study. 
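To illustrate the Setting-2 count model, a hedged sketch is given below; the Gamma(0.25, 600) draw is read as (shape, scale), which is an assumption, and the negative-binomial call is parameterized so that each count has mean λ gt and variance λ gt (1+λ gt /ϕ t ), matching the variance form quoted earlier for the sequence data.

# Setting 2 of Simulation Study II: 5000 genes, 3 libraries per condition,
# 250 up- and 250 down-regulated genes in condition 2 versus condition 1.
import numpy as np

rng = np.random.default_rng(3)
G, K = 5000, 3
lam = rng.gamma(shape=0.25, scale=600.0, size=G)   # overall level lambda_g (parameterization assumed)
xi = rng.uniform(3, 20, size=G)                    # fold change / difference xi_g

up = np.arange(G) < 250
down = (np.arange(G) >= 250) & (np.arange(G) < 500)
lam1, lam2 = lam.copy(), lam.copy()                # EE genes keep lambda_g1 = lambda_g2 = lambda_g

big = lam > 1
m = up & big
lam1[m] = lam[m] / np.sqrt(xi[m])                  # multiplicative fold change when lambda_g > 1
lam2[m] = lam[m] * np.sqrt(xi[m])
m = down & big
lam1[m] = lam[m] * np.sqrt(xi[m])
lam2[m] = lam[m] / np.sqrt(xi[m])
m = up & ~big
lam2[m] = lam[m] + xi[m]                           # additive difference when lambda_g <= 1
m = down & ~big
lam1[m] = lam[m] + xi[m]

de = np.arange(G) < 500
phi1 = np.where(de, 1.0, 4.0)                      # dispersions: (1, 8) for DE genes, (4, 4) for EE genes
phi2 = np.where(de, 8.0, 4.0)
# NB(n = phi, p = phi/(phi+lambda)) has mean lambda and variance lambda*(1 + lambda/phi)
y1 = rng.negative_binomial(phi1[:, None], phi1[:, None] / (phi1[:, None] + lam1[:, None]), size=(G, K))
y2 = rng.negative_binomial(phi2[:, None], phi2[:, None] / (phi2[:, None] + lam2[:, None]), size=(G, K))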
The results in Table 4 displayed that the confident difference criterion method with a control of FDR at 0.05, the edgeR method with common dispersion parameter over genes, the edgeR with gene-wise dispersion parameter, the BaySeq, the NBPSeq, the NOISeq methods successfully controlled the FDR at 0.05. Additionally the confident difference criterion method, the NBPSeq method, the edgeR method with a common dispersion parameter over genes also provided a good and comparable control of FNR of less than 0.2, while maintaining low levels of FPR and FNDR. Table 4 Performance evaluation under Study II (Setting II), (G=5000, 500 DE gene) # Real data analysis We used a real data set obtained using customized Bovine Affymetrix arrays (Davis, Talbott, Yu, and Cupp, unpublished results) to illustrate the proposed method. Fifteen arrays composed of three replicate arrays under three biological conditions were produced to screen for DE genes associated with prostaglandin F2α(PGF) treatment after 30 min, 1 h, 2 h, and 4 h compared to the control treatment (saline). For simplicity, we focused on detecting genes using the confident difference criterion methods (Method I and Method II) that were regulated 1 h or 2 h after PGF treatment. The data were extracted, normalized and summarized using the Robust Multi-array Average (RMA) [12] method at the exon level via the Affymetrix expression console. The data set contains 21724 genes. Note that some genes may have multiple probe replicates ranging from one replicate to 266 replicates, and the data from different probes of the same gene may have large variation even after RMA normalization. We centered the data from each probe of the same gene to the mean log intensities of that gene, and excluded 3116 genes with only a single probe replicate from the analysis to make sure that the parameters were estimable. Additionally, we excluded 2137 low expression genes if two-thirds or more (six out of nine) samples on this gene had gene expression values measured by the geometric mean expression values across different probes less than 10. Of the remaining 16471 genes with replicate probes, we used z gjtk to denote the k th biological replicate sample of the log2 scale gene expression intensity for probe j of gene g under condition t. Note that the index j was added to the previous notations for the log intensity values as data are available for multiple probes on the same gene. We assumed normal distribution for the log2 intensities with \(z_{\textit {gjtk}} \sim \mathcal {N}(\mu _{\textit {gtk}}, \sigma ^{2\ast }_{\textit {gt}})\), and the same prior for μ gtk as what we set for X gt in the Model for microarray data subsection. The variance parameters are assumed to follow inverse gamma distribution with \(\sigma ^{2\ast }_{\textit {gt}} \sim \mathcal {IG}(\alpha ^{\ast }_{t},\beta ^{\ast }_{t})\) with \(\alpha ^{\ast }_{t}=2\) and \(\beta ^{\ast }_{t} \sim \mathcal {G}(\alpha ^{\ast }_{0},\beta ^{\ast }_{0})\). We set \(\alpha ^{\ast }_{0}=1\) and \(\beta ^{\ast }_{0}\sim IG(\alpha ^{\ast },\beta ^{\ast })\) where both α ∗=β ∗=1. During computation for controlling the FDR, we reuse these settings of the prior distributions on the parameters μ gtk and \(\sigma ^{2\ast }_{\textit {gt}}\) for DE genes. For EE gens, we assume that \(z_{\textit {gjtk}} \sim \mathcal {N}(\mu _{\textit {gtk}},\sigma ^{2\ast }_{g})\), and make similar augment for the prior distributions of their parameters μ gtk and \(\sigma ^{2\ast }_{g}\) as the DE genes. 
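A sketch of the pre-processing steps described above (probe filtering, low-expression filtering, and centering) is given below, assuming the RMA output is held in a long-format table with columns gene, probe, sample and log2_intensity; the column names, the pandas layout, and the reading of the cutoff of 10 as a raw-scale (anti-logged) threshold are assumptions on the part of this sketch.

# Hedged sketch of the probe-level filtering and centering described above.
import pandas as pd

def preprocess(df, min_probes=2, low_expr_cutoff=10, max_low_samples=6):
    # 1) drop genes measured by a single probe (their parameters would not be estimable)
    n_probes = df.groupby("gene")["probe"].nunique()
    df = df[df["gene"].isin(n_probes[n_probes >= min_probes].index)]

    # 2) drop low-expression genes: geometric mean across probes (raw scale) below the
    #    cutoff in six or more of the nine samples
    geo_mean = 2.0 ** df.groupby(["gene", "sample"])["log2_intensity"].mean()
    n_low = (geo_mean < low_expr_cutoff).groupby(level="gene").sum()
    df = df[df["gene"].isin(n_low[n_low < max_low_samples].index)]

    # 3) center each probe at the mean log2 intensity of its gene
    gene_mean = df.groupby("gene")["log2_intensity"].transform("mean")
    probe_mean = df.groupby(["gene", "probe"])["log2_intensity"].transform("mean")
    return df.assign(log2_centered=df["log2_intensity"] - probe_mean + gene_mean)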
The proposed confident difference criterion methods were applied to assess the evidence of differential expression on each gene and identify DE genes with the cutoff value equal to be 0.4, 0.6 or a value that controls the FDR at 0.05. In addition, we analyzed the real data using the existing methods including SAM, LIMMA, and EBarrays as described in the Simulation Studies section for identification of DE genes. Since the existing methods were developed for data with single probe replicate on each gene, we calculated the mean log intensities over all probes for each biological sample on each gene to quantify the corresponding gene expression. The genes were declared to be DE if the false discovery rate was no more than 0.05. We used Venn diagrams to demonstrate the overlap of DE genes identified by Method I (Fig. 2, Left Panel) or Method II (Fig. 2, Right Panel), to the DE genes identified by SAM and EBarrays (Fig. 2). The results showed that more genes were identified to be DE by the proposed Method I and Method II than the existing methods. Specifically, 1050 DE genes were identified by Method II, while 896 genes were identified to be DE by either SAM or EBarrays. Of note 340 out of 353 DE genes identified by LIMMA were also identified by SAM (data not shown), and 951 of 991 DE genes identified by Method I were also identified as DE by Method II. We found that SAM identified 375 DE genes, all of which were also identified by other methods. For example, 358 (95.5 %) genes identified by SAM were also identified by Method I or II; and 342 (91.2 %) genes identified by SAM were also identified by EBarrays method. The EBarrays method identified 863 DE genes, out of which, 643 (74.5 %) genes were also identified by Method I or II. Method I identified 116 of the 324 genes identified by LIMMA when comparing all four time points versus control in the same dataset, while Method II called 105 out of 387 genes DE that were also called DE by LIMMA within the whole dataset. In addition, many genes identified to be DE only by Method II not by Method I show a linear trend among the average gene expression across conditions observed from samples collected with longer time after treatment, and larger data variations under the control condition than those observed at other time points after treatment. For example, the average log2 gene expression of THBS1 increased from 9.22 under control condition to 10.35 at 2h after treatment, and the standard deviation equaled 0.88 under the control condition, and 0.37 at 2 h after treatment. This gene was only detected to be DE by Method II and was shown to play roles in angiogenesis [37]. Number of identified DE genes out of 16471 genes from real data analysis. Two venndiagrams present the overlapping among the DE genes identified separately by Method I/II, SAM, and EBarrays with the false discovery rate controlled at 0.05 from the real data The genes identified solely by Method I or Method II were analyzed by Ingenuity Pathway Analysis (IPA, Build version: 313398M, Content version: 18841524 (Release Date: 2014-06-24) to determine biological functions and pathways associated with the newly identified genes. Genes identified solely by Method I and not by SAM or EBarrays were analyzed by IPA which identified several major canonical pathways such as hepatic fibrosis / hepatic stellate cell activation, glucocortiocoid receptor signaling, agranulocyte adhesion and diapedesis, and role of IL-17A in arthritis (Additional file 2: Table S1). 
Many of the canonical pathways identified have either established or potential roles in corpus luteum function indicating that Method I identified DE genes that are biologically relevant within the model. Method I also identified IL1B (P=2.12E−08) and TNF (P=3.03E−08) as upstream regulators of the genes found exclusively by Method I, which also fits with known and suspected mechanisms of PGF action within the corpus luteum [1, 24]. Genes identified solely by Method II were also analyzed by IPA which identified canonical pathways such as hepatic fibrosis/hepatic stellate cell activation [21], glucocorticoid receptor signaling, IL-8 signaling, and granulocyte adhesion and diapedesis. Upstream regulators of gene found solely by Method II included: IL1B (P=4.56E−13), TGFB1 (P=1.19E−11), and IFNG (P=1.82E−11). The IPA results both concur with current literature and offer new insights into the possible mechanism(s) of action of PGF in the corpus luteum [1, 9, 11, 21]. These and similar canonical and regulatory functions were also identified when the complete dataset (30 min, 1 h, 2 h, and 4 h) was analyzed by IPA. These network functions are in agreement with the known or suspected changes in biological function in the corpus luteum following PGF treatment in several species [1, 5, 22, 27]. Several of the genes identified by Methods I and II are known to be involved in regulation of the fate of the corpus luteum after PGF treatment, and were also identified as DE genes in our larger data set and a similar microarray dataset examining the effects of PGF in the cow [22]. For example, genes that code for chemokines (e.g., CCL3 and CCL8) and prostaglandin synthesis (e.g., PTGS2) were found to be significantly up-regulated at 1 and 2 h using Methods I and II which were not identified using LIMMA. However, CCL3, CCL8, and PTGS2 were all identified as significantly up-regulated in later time points using LIMMA, which conservatively identifies DE genes. Therefore, it seems possible that Methods I and II may provide a more sensitive approach to identify the temporal patterns of gene expression. In this paper, we have proposed a new differentially expressed gene selection algorithm, which controls the FDR based on predictive Bayesian estimates. The simulation studies empirically showed that the proposed confident difference criterion methods outperform the existing methods when comparing gene expressions across different conditions for both microarray studies and sequence-based high-throughput studies. For the analysis of the real data, the method II successfully identified more clinically important DE genes than the other methods. In comparison to Method I, the Method II provides a much better sensitivity rate, but slightly a lower specificity rate based on the simulation studies. In scenarios where the data are not symmetrically distributed, we need to model the data with other types of distributions (e.g., a gamma distribution). The confident difference criterion method proposed for comparing both means and variances can also be extended to evaluate the differences in multiple parameters defined in the non-normal data distributions. In addition, the Euclidean distances used in the proposed confident difference criterion method may also be extended to other types of distances to measure the difference among the distributions under two or more biological conditions. In the case of two conditions, the entropy-based distance such as the Kullback-Leibler (KL) divergence may be considered. 
However, the distribution of the entropy-based statistics is quite difficult to characterize and, hence, it is quite challenging to choose the cutoff value for the entropy statistics. Such extensions need to be further investigated. Finally, we note that all models considered in this paper assume that the gene expressions are independent across genes. The proposed confident difference criterion methods do not require the independence assumption. However, the performance of the confident difference criterion methods under the correlated models need to be further examined. All analyses results presented in this paper were obtained using codes developed in FORTRAN with IMSL library. We have also implemented the proposed method in R for windows (32 bits). The R codes can be obtained at the websites: http://www.unmc.edu/publichealth/departments/biostatistics/facultyandstaff/cdc_micro.zipand http://www.unmc.edu/publichealth/departments/biostatistics/facultyandstaff/cdc_RNASeq.zip. Atli MO, Bender RW, Mehta V, Bastos MR, Luo W, Vezina CM, et al. Patterns of gene expression in the bovine corpus luteum following repeated intrauterine infusions of low doses of prostaglandin F2 α. Biol Reprod. 2012; 86(4):130. Anders S, Huber W. Differential expression analysis for sequence count data. Genome Biol. 2010; 11:R106. Auer PL, Doerge RW. A two-stage poisson model for testing RNA-Seq data. Stat Appl Genet Mol Biol. 2011; 10:1–26. Bentley DR, Balasubramanian S, Swerdlow HP, Smith GP, Milton J, Brown CG, et al. Accurate whole human genome sequencing using reversible terminator chemistry. Nature. 2008; 456(7218):53–9. Bishop CV, Bogan RL, Hennebold JD, Stouffer RL. Analysis of microarray data from the macaque corpus luteum; the search for common themes in primate luteal regression. Mol Hum Reprod. 2011; 17(3):143–51. Chen M-H, Ibrahim JG, Chi Y-Y. A new class of mixture models for differential gene expression in DNA microarray data. J Stat Plan Inference. 2008; 138:387–404. Dudroit S, Yang YH, Callow MJ, Speed TP. Statistical methods for identifying differentially expressed genes in replicated cDNA microarray experiments. Statistica Sinica. 2002; 12:111–39. Di Y, Schafer DW, Cumbie JS, Chang JH. The NBP negative binomial model for assessing differential gene expression from RNA-Seq. Stat Appl Genet Mol Biol. 2011; 10(1):1–28. Galväo AM, Ferreira-Dias G, Skarzynski DJ. Cytokines and angiogenesis in the corpus luteum. Mediators Inflamm. 2013; 2013:420186. Hardcastle TJ, baySeq KellyKA. Empirical Bayesian analysis of patterns of differential expression in count data. BMC Bioinformatics. 2010; 11:422–35. Hou X, Arvisais EW, Jiang C, Chen DB, Roy SK, Pate JL, et al. Prostaglandin F2 α stimulates the expression and secretion of transforming growth factor B1 via induction of the early growth response 1 gene (EGR1) in the bovine corpus luteum. Mol Endocrinol. 2008; 22(2):403–414. Irizarry RA, Hobbs B, Collin F, Beazer-Barclay YD, Antonellis KJ, Scherf U, et al. Exploration, normalization, and summaries of high density oligonucleotide array probe level data. Biostatistics. 2003; 4(2):249–64. Ibrahim JG, Chen M-H, Gray RJ. Bayesian models for gene expression with DNA microarray data. J Am Stat Assoc. 2002; 97:88–99. Kendziorski CM, Newton MA, Lan H, Gould MN. On parametric empirical Bayes methods for comparing multiple groups using replicated gene expression profiles. Stat Med. 2003; 22:3899–914. Kuo L, Yu F, Zhao Y. Statistical methods for identifying differentially expressed genes in replicated experiments: A review. 
In: Biswas A, Data S, Fine J, Segal M, editors. Statistical Advances in the Biomedical Sciences: Clinical Trials, Epidemiology, Survival Analysis, and Bioinformatics. Hoboken, NJ: Wiley-Interscience: 2008. p. 341–64. Kvam VM, Liu P, Si Y. A comparison of statistical methods for detecting differentially expressed genes from RNA-seq data. Am J Bot. 2012; 99(2):248–56. Leng N, Dawson JA, Stewart RM, Ruotti V, Rissman A, Smits B, et al. EBseq: An empirical Bayes hierarchical model for inference in RNA-seq experiments. Bioinformatics. 2013; 29(8):1035–43. Ley TJ, Mardis ER, Ding L, Fulton B, McLellan MD, Chen K, et al. DNA sequencing of a cytogenetically normal acute myeloid leukaemia genome. Nature. 2008; 456(7218):66–72. Li J, Tibshirani R. Finding consistent patterns: a nonparametric approach for identifying differential expression in RNA-seq data. Stat Methods Med Res. 2013; 22:519–36. Lu J, Tomfohr JK, Kepler TB. Identifying differential expression in multiple SAGE libraries: an overdispersed log-linear model approach. BMC Bioinformatics. 2005; 6:165. Maroni D, Davis JS. TGFB1 disrupts the angiogenic potential of microvascular endothelial cells of the corpus luteum. J Cell Sci. 2012; 124(14):2501–510. Mondal M, Schilling B, Folger J, Steibel JP, Buchnick H, Zalman Y, et al. Deciphering the luteal transcriptome: potential mechanisms mediating stage-specific luteolytic response of the corpus luteum to prostaglandin F 2 α. Physiol Genomics. 2011; 43(8):447–56. Newton MA, Noueiry A, Sarkar D, Ahlquist P. Detecting differential gene expression with a semiparametric hierarchical mixture method. Biostatistics. 2004; 5:155–76. Okuda K, Sakumoto R. Multiple roles of TNF super family members in corpus luteum function. Reprod Biol Endocrinol. 2003; 1:95. Pan W. A comparative review of statistical methods for discovering differentially expressed genes in replicated microarray experiments. Bioinformatics. 2002; 18:546–54. Robinson MD, McCarthy DJ. Smyth GK. edgeR: a Bioconductor package for differential expression analysis of digital gene expression data. Bioinformatics. 2010; 26:139–40. Romero JJ, Antoniazzi AQ, Smirnova NP, Webb BT, Yu F, Davis JS, et al. Pregnancy-associated genes contribute to antiluteolytic mechanisms in ovine corpus luteum. Physiol Genomics. 2013; 45(22):1095–1108. Smyth GK. Linear models and empirical Bayes methods for assessing differential expression in microarray experiments. Stat Appl Genet Mol Biol. 2004; 3(1):Article 3. Soneson C, Delorenzi M. A comparison of methods for differential expression analysis of RNA-Seq data. BMC Bioinformatics. 2013; 14:91. Storey JD. A direct approach to false discovery rates. J R Stat Soc Ser B. 2002; 64:479–98. Tadesse MG, Ibrahim JG, Vannucci M, Gentleman R. Wavelet thresholding with Bayesian false discovery rate control. Biometrics. 2005; 61:25–35. Tarazona S, García-Alcalde F, Dopazo J, Ferrer A, Conesa A. Differential expression in RNA-Seq: a matter of depth. Genome Res. 2011; 21:2213–223. Tusher VG, Ti bshirani R, Chu G. Significance analysis of microarrays applied to the ionizing radiation response. Proc Natl Acad Sci U S A. 2011; 98:5116–121. Wang J, Wang W, Li R, Li Y, Tian G, Goodman L, et al. The diploid genome sequence of an Asian individual. Nature. 2008; 456:60–65. Wilson EB, Hilferty MM. The distribution of chi-square. Proc Natl Acad Sci U S A. 1931; 17:684–88. Yu F, Chen M-H, Kuo L. Detecting differentially expressed genes using calibrated Bayes factors. Statistica Sinica. 2008; 18:783–802. 
Zalman Y, Klipper E, Farberov S, Mondal M, Wee G, Folger JK. Regulation of angiogenesis-related prostaglandin F2alpha-induced genes in the bovine corpus luteum. Biol Reprod. 2012; 86(3):92.
We would like to thank the Editor and two referees for their very helpful comments and suggestions, which have led to an improved version of the paper. This work was supported by COPH Dean's Mentored Research Grant at UNMC (FY), NIH grants #GM 70335 and #P01 CA142538 (MHC), Agriculture and Food Research Initiative (AFRI) Competitive Grant no. 2011-67015-20076 from the USDA National Institute of Food and Agriculture (NIFA) (JSD), the Department of Veterans Affairs (JSD), the Olson Center for Women's Health (HT and JSD), and an AFRI NIFA Predoctoral Fellowship Award (HT).
Fang Yu: Department of Biostatistics, University of Nebraska Medical Center, Omaha, 68198-4350, NE, USA.
Ming-Hui Chen & Lynn Kuo: Department of Statistics, University of Connecticut, Storrs, 06269-4120, CT, USA.
Heather Talbott: Department of Biochemistry and Molecular Biology and Department of Obstetrics and Gynecology, University of Nebraska Medical Center, Omaha, 68198-5870, NE, USA.
John S. Davis: VA Nebraska-Western Iowa Health Care System and Department of Obstetrics and Gynecology, University of Nebraska Medical Center, Omaha, 68198-3255, NE, USA.
Correspondence to Fang Yu.
FY, MHC and LK developed the method, and carried out the simulation and real data analysis. HT and JD provided the real data and conducted the Ingenuity pathway analysis. All authors contributed to the writing, proofreading and approval of the final manuscript.
Additional file 1: Methods. Mathematical Proof for Propositions 1 and 2. Real Data Analysis Results. Canonical Pathways identified by Methods I and II.
Yu, F., Chen, M., Kuo, L. et al. Confident difference criterion: a new Bayesian differentially expressed gene selection algorithm with applications. BMC Bioinformatics 16, 245 (2015). doi:10.1186/s12859-015-0664-3
Differential expression
Invariant stable manifolds for partial neutral functional differential equations in admissible spaces on a half-line Kurzweil integral representation of interacting Prandtl-Ishlinskii operators November 2015, 20(9): 2967-2992. doi: 10.3934/dcdsb.2015.20.2967 Shape stability of optimal control problems in coefficients for coupled system of Hammerstein type Olha P. Kupenko 1, and Rosanna Manzo 2, Dnipropetrovsk Mining University, Department of System Analysis and Control, Karl Marks av., 19, 49005 Dnipropetrovsk, Ukraine Università degli Studi di Salerno, Dipartimento di Ingegneria dell'Informazione, Ingegneria Elettrica e Matematica Applicata, Via Giovanni Paolo II, 132, 84084 Fisciano (SA), Italy Received April 2014 Revised October 2014 Published September 2015 In this paper we consider an optimal control problem (OCP) for the coupled system of a nonlinear monotone Dirichlet problem with matrix-valued $L^\infty(\Omega;\mathbb{R}^{N\times N} )$-controls in coefficients and a nonlinear equation of Hammerstein type. Since problems of this type have no solutions in general, we make a special assumption on the coefficients of the state equation and introduce the class of so-called solenoidal admissible controls. Using the direct method in calculus of variations, we prove the existence of an optimal control. We also study the stability of the optimal control problem with respect to the domain perturbation. In particular, we derive the sufficient conditions of the Mosco-stability for the given class of OCPs. Keywords: control in coefficients, Nonlinear monotone Dirichlet problem, domain perturbation., equation of Hammerstein type. Mathematics Subject Classification: Primary: 49J20, 35J57; Secondary: 49J45, 93C7. Citation: Olha P. Kupenko, Rosanna Manzo. Shape stability of optimal control problems in coefficients for coupled system of Hammerstein type. Discrete & Continuous Dynamical Systems - B, 2015, 20 (9) : 2967-2992. doi: 10.3934/dcdsb.2015.20.2967 D. E. Akbarov, V. S. Melnik and V. V. Jasinskiy, Coupled Systems Control Methods, Viriy, Kyiv, 1998 (in Russian). Google Scholar G. Allaire, Shape Optimization by the Homogenization Method, Applied Mathematical Sciences, vol. 146, Springer, New York, 2002. doi: 10.1007/978-1-4684-9286-6. Google Scholar T. Bagby, Quasi topologies and rational approximation, J. Func. Anal., 10 (1972), 259-268. doi: 10.1016/0022-1236(72)90025-0. Google Scholar D. Bucur and G. Buttazzo, Variational Methods in Shape Optimization Problems, Birkhäuser, Boston: in Progress in Nonlinear Differential Equations and their Applications, Vol. 65, 2005. Google Scholar D. Bucur and P. Trebeschi, Shape optimization problems governed by nonlinear state equations, Proc. Roy. Soc. Edinburgh, Ser. A, 128 (1998), 943-963. doi: 10.1017/S0308210500030006. Google Scholar D. Bucur and J. P. Zolésio, $N$-Dimensional Shape Optimization under Capacitary Constraints, J. Differential Equations, 123 (1995), 504-522. doi: 10.1006/jdeq.1995.1171. Google Scholar G. Buttazzo and G. Dal Maso, Shape optimization for Dirichlet problems. Relaxed SIS and optimally conditions, Appl. Math. Optim., 23 (1991), 17-49. doi: 10.1007/BF01442391. Google Scholar C. Calvo-Jurado and J. Casado-Diaz, Results on existence of solution for an optimal design problem, Extracta Mathematicae, 18 (2003), 263-271. Google Scholar G. Dal Maso and F. Murat, Asymptotic behaviour and correctors for Dirichlet problem in perforated domains with homogeneous monotone operators, Ann. Scuola Norm. Sup. Pisa Cl.Sci., 24 (1997), 239-290. 
Google Scholar G. Dal Maso, F. Ebobisse and M. Ponsiglione, A stability result for nonlinear Neumann problems under boundary variations, J. Math. Pures Appl., 82 (2003), 503-532. doi: 10.1016/S0021-7824(03)00014-X. Google Scholar E. N. Dancer, The effect of domains shape on the number of positive solutions of certain nonlinear equations, J. Diff. Equations, 87 (1990), 316-339. doi: 10.1016/0022-0396(90)90005-A. Google Scholar D. Daners, Domain perturbation for linear and nonlinear parabolic equations, J. Diff. Equations, 129 (1996), 358-402. doi: 10.1006/jdeq.1996.0122. Google Scholar C. D'Apice, U. De Maio and O. P. Kogut, On shape stability of Dirichlet optimal control problems in coefficients for nonlinear elliptic equations, Advances in Differential Equations, 15 (2010), 689-720. Google Scholar C. D'Apice, U. De Maio and O. P. Kogut, Optimal control problems in coefficients for degenerate equations of monotone type: shape stability and attainability problems, SIAM Journal on Control and Optimization, 50 (2012), 1174-1199. doi: 10.1137/100815761. Google Scholar C. D'Apice, U. De Maio and P. I. Kogut, Suboptimal boundary control for elliptic equations in critically perforated domains, Ann. Inst. H. Poincaré Anal. Non Line'aire, 25 (2008), 1073-1101. doi: 10.1016/j.anihpc.2007.07.001. Google Scholar L. C. Evans and R. F. Gariepy, Measure Theory and Fine Properties of Functions, CRC Press, Boca Raton, 1992. Google Scholar K. J. Falconer, The Geometry of Fractal Sets, Cambridge University Press, Cambridge, 1986. Google Scholar H. Gajewski, K. Gröger and K. Zacharias, Nichtlineare Operatorgleichungen und Operatordifferentialgleichungen, Academie-Varlar, Berlin, 1974. Google Scholar J. Haslinger and P. Neittaanmäki, Finite Element Approximation of Optimal Shape. Material and Topology Design, John Wiley and Sons, Chichester, 1996. Google Scholar J. Heinonen, T. Kilpelainen and O. Martio, Nonlinear Potential Theory of Degenerate Elliptic Equations, Unabridged republication of the 1993 original. Dover Publications, Inc., Mineola, NY, 2006. Google Scholar V. I. Ivanenko and V. S. Mel'nik, Varational Metods in Control Problems for Systems with Distributed Parameters, Naukova Dumka, Kiev, 1988 (in Russian). Google Scholar O. P. Kogut, Qualitative Analysis of one Class of Optimization Problems for Nonlinear Elliptic Operators, PhD thesis at Gluskov Institute of Cyberentics NAS Kiev, 2010. Google Scholar P. I. Kogut and G. Leugering, Optimal Control Problems for Partial Differential Equations on Reticulated Domains, Series: Systems and Control, Birkhäuser Verlag, 2011. doi: 10.1007/978-0-8176-8149-4. Google Scholar O. P. Kupenko, Optimal control problems in coefficients for degenerate variational inequalities of monotone type.I Existence of solutions, Journal of Computational & Applied Mathematics, 106 (2011), 88-104. Google Scholar I. Lasiecka, NSF-CMBS Lecture Notes: Mathematical Control Theory of Coupled Systems of Partial Differential Equations, SIAM, 2002. Google Scholar J.-L. Lions, Optimal Control of Systems Governed by Partial Differential Equations, Springer Verlag, New York, 1971. Google Scholar K. A. Lurie, Applied Optimal Control Theory of Distributed Systems, Plenum Press, NewYork, 1993. doi: 10.1007/978-1-4757-9262-1. Google Scholar U. Mosco, Convergence of convex sets and of solutions of variational inequalities, Adv. Math., 3 (1969), 510-585. doi: 10.1016/0001-8708(69)90009-7. Google Scholar F. Murat, Un contre-exemple pour le probleme du controle dans les coefficients, C. R. Acad. 
Sci. Paris Ser. A-B, 273 (1971), A708-A711. Google Scholar F. Murat and L. Tartar, H-convergence. Topics in the mathematical modelling of composite materials, Progr. Nonlinear Differential Equations Appl., Birkhäuser Boston, Boston, MA, 31 (1997), 21-43. Google Scholar O. Pironneau, Optimal Shape Design for Elliptic Systems, Springer-Verlag, Berlin, 1984. doi: 10.1007/978-3-642-87722-3. Google Scholar U. Ë. Raytum, Optimal Control Problems for Elliptic Equations, Zinatne, Riga, 1989 (in Russian). Google Scholar J. Sokolowski and J. P. Zolesio, Introduction to Shape Optimization, Springer-Verlag, Berlin, 1992. doi: 10.1007/978-3-642-58106-9. Google Scholar D. Tiba, Lectures on the Control of Elliptic Systems, in: Lecture Notes, 32, Department of Mathematics, University of Jyväskylä, Finland, 1995. Google Scholar M. M. Vainberg and I. M. Lavrentieff, Nonlinear equations of hammerstein type with potential and monotone operators in banach spaces, Matematicheskij Sbornik, no. 3, 87 (1972), 324-337 (in Russian). Google Scholar M. Z. Zgurovski and V. S. Mel'nik, Nonlinear Analysis and Control of Physical Processes and Fields, Springer-Verlag, Berlin, 2004. doi: 10.1007/978-3-642-18770-4. Google Scholar M. Z. Zgurovski, V. S. Mel'nik and A. N. Novikov, Applied Methods for Analysis and Control of Nonlinear Processes and Fields, Naukova Dumka, Kiev, 2004 (in Russian). Google Scholar W. P. Ziemer, Weakly Differentiable Functions, Springer-Verlag, Berlin, 1989. doi: 10.1007/978-1-4612-1015-3. Google Scholar Gabriella Zecca. An optimal control problem for some nonlinear elliptic equations with unbounded coefficients. Discrete & Continuous Dynamical Systems - B, 2019, 24 (3) : 1393-1409. doi: 10.3934/dcdsb.2019021 Peter I. Kogut. On approximation of an optimal boundary control problem for linear elliptic equation with unbounded coefficients. Discrete & Continuous Dynamical Systems, 2014, 34 (5) : 2105-2133. doi: 10.3934/dcds.2014.34.2105 Khadijah Sharaf. A perturbation result for a critical elliptic equation with zero Dirichlet boundary condition. Discrete & Continuous Dynamical Systems, 2017, 37 (3) : 1691-1706. doi: 10.3934/dcds.2017070 Peter I. Kogut, Olha P. Kupenko. On optimal control problem for an ill-posed strongly nonlinear elliptic equation with $p$-Laplace operator and $L^1$-type of nonlinearity. Discrete & Continuous Dynamical Systems - B, 2019, 24 (3) : 1273-1295. doi: 10.3934/dcdsb.2019016 Thierry Horsin, Peter I. Kogut, Olivier Wilk. Optimal $L^2$-control problem in coefficients for a linear elliptic equation. II. Approximation of solutions and optimality conditions. Mathematical Control & Related Fields, 2016, 6 (4) : 595-628. doi: 10.3934/mcrf.2016017 Haiyang Wang, Zhen Wu. Time-inconsistent optimal control problem with random coefficients and stochastic equilibrium HJB equation. Mathematical Control & Related Fields, 2015, 5 (3) : 651-678. doi: 10.3934/mcrf.2015.5.651 Thierry Horsin, Peter I. Kogut. Optimal $L^2$-control problem in coefficients for a linear elliptic equation. I. Existence result. Mathematical Control & Related Fields, 2015, 5 (1) : 73-96. doi: 10.3934/mcrf.2015.5.73 Isabeau Birindelli, Francoise Demengel. The dirichlet problem for singluar fully nonlinear operators. Conference Publications, 2007, 2007 (Special) : 110-121. doi: 10.3934/proc.2007.2007.110 A. Rodríguez-Bernal. Perturbation of the exponential type of linear nonautonomous parabolic equations and applications to nonlinear equations. Discrete & Continuous Dynamical Systems, 2009, 25 (3) : 1003-1032. 
Computer Science > Cryptography and Security [Submitted on 24 May 2019 (v1), last revised 15 Feb 2021 (this version, v4)] Title:Quantum Period Finding is Compression Robust Authors:Alexander May, Lars Schlieper Abstract: We study quantum period finding algorithms such as Simon and Shor (and its variants Ekerå-Håstad and Mosca-Ekert). For a periodic function $f$ these algorithms produce -- via some quantum embedding of $f$ -- a quantum superposition $\sum_x |x\rangle|f(x)\rangle$, which requires a certain amount of output qubits that represent $|f(x)\rangle$. We show that one can lower this amount to a single output qubit by hashing $f$ down to a single bit in an oracle setting. Namely, we replace the embedding of $f$ in quantum period finding circuits by oracle access to several embeddings of hashed versions of $f$. We show that on expectation this modification only doubles the required amount of quantum measurements, while significantly reducing the total number of qubits. For example, for Simon's algorithm that finds periods in $f: \mathbb{F}_2^n \rightarrow \mathbb{F}_2^n$ our hashing technique reduces the required output qubits from $n$ down to $1$, and therefore the total amount of qubits from $2n$ to $n+1$. We also show that Simon's algorithm admits real world applications with only $n+1$ qubits by giving a concrete realization of a hashed version of the cryptographic Even-Mansour construction. Moreover, for a variant of Simon's algorithm on Even-Mansour that requires only classical queries to Even-Mansour we save a factor of (roughly) $4$ in the qubits. Our oracle-based hashed version of the Ekerå-Håstad algorithm for factoring $n$-bit RSA reduces the required qubits from $(\frac 3 2 + o(1))n$ down to $(\frac 1 2 + o(1))n$. We also show a real-world (non-oracle) application in the discrete logarithm setting by giving a concrete realization of a hashed version of Mosca-Ekert for the Decisional Diffie Hellman problem in $\mathbb{F}_{p^m}$, thereby reducing the number of qubits by even a linear factor from $m \log p$ downto $\log p$. Subjects: Cryptography and Security (cs.CR); Quantum Physics (quant-ph) Cite as: arXiv:1905.10074 [cs.CR] (or arXiv:1905.10074v4 [cs.CR] for this version) From: Lars Schlieper [view email] [v1] Fri, 24 May 2019 07:35:04 UTC (450 KB) [v2] Mon, 4 Nov 2019 19:35:58 UTC (60 KB) [v3] Mon, 9 Nov 2020 12:47:52 UTC (208 KB) [v4] Mon, 15 Feb 2021 15:21:11 UTC (209 KB) Lars Schlieper
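A purely classical, hedged illustration of the hashing idea (not the paper's quantum circuit): if f(x) = f(x ⊕ s) for every x, then h(f(x)) = h(f(x ⊕ s)) for any hash h, so hashing the output down to one bit cannot destroy the hidden period; the price is extra collisions, which is why the expected number of measurements roughly doubles. A toy check in Python, with every name chosen here for illustration only:

# Toy classical check: a Simon-style 2-to-1 periodic function on F_2^n stays
# periodic with the same hidden shift s after its output is hashed to one bit.
import random

n, s = 6, 0b010110                  # hidden period s != 0
random.seed(0)

reps = sorted({min(x, x ^ s) for x in range(2 ** n)})               # one representative per coset {x, x^s}
labels = dict(zip(reps, random.sample(range(2 ** n), len(reps))))   # distinct random outputs per coset

def f(x):                           # f(x) = f(x ^ s) by construction
    return labels[min(x, x ^ s)]

def h(y):                           # an arbitrary 1-bit hash of the n-bit output
    return bin(y & 0b101101).count("1") % 2

assert all(f(x) == f(x ^ s) for x in range(2 ** n))        # f is s-periodic
assert all(h(f(x)) == h(f(x ^ s)) for x in range(2 ** n))  # so is the hashed function
# The hashed function also has collisions h(f(x)) == h(f(y)) with f(x) != f(y); these are
# what make the hashed version need, on expectation, about twice as many measurements.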
Home » Statistics » Pearson Correlation Coefficient Calculator Examples Pearson Correlation Coefficient Calculator Examples by Dr. Raju Chaudhari Pearson's Correlation Coefficient Calculator with Examples 1 Karl Pearson's Correlation Coefficient 1.1 $r = \dfrac{Cov(X,Y)}{\sqrt{Var(X) Var(Y)}}$ 3 Karl Pearson's Correlation Coefficient Calculator 4 How to calculate Pearson's Correlation Coefficient? 5 Pearson's Correlation Coefficient Example 1 Karl Pearson's Correlation Coefficient Let $(x_i, y_i), i=1,2, \cdots , n$ be $n$ pairs of observations then the Karl Pearson's coefficient of correlation between two variables $X$ and $Y$ is denoted by $r_{xy}$ or $r$ and is given by $r = \dfrac{Cov(X,Y)}{\sqrt{Var(X) Var(Y)}}$ the sample covariance between $x$ and $y$ is $$ \begin{aligned} Cov(x,y) =s_{xy}&=\frac{1}{n-1}\sum_{i=1}^{n}(x_i -\overline{x})(y_i-\overline{y})\\ &= \frac{1}{n-1}\bigg(\sum_{i=1}^n x_iy_i - \frac{(\sum_{i=1}^n x_i)(\sum_{i=1}^n y_i)}{n}\bigg) \end{aligned} $$ the sample variance of $x$ is $$ \begin{aligned} V(x) =s_{x}^2 &=\frac{1}{n-1}\sum_{i=1}^{n}(x_i -\overline{x})^2\\ &= \frac{1}{n-1}\bigg(\sum_{i=1}^n x_i^2 - \frac{(\sum_{i=1}^n x_i)^2}{n}\bigg) \end{aligned} $$ the sample variance of $y$ is $$ \begin{aligned} V(y) =s_{y}^2 &=\frac{1}{n-1}\sum_{i=1}^{n}(y_i -\overline{y})^2\\ &= \frac{1}{n-1}\bigg(\sum_{i=1}^n y_i^2 - \frac{(\sum_{i=1}^n y_i)^2}{n}\bigg) \end{aligned} $$ the sample mean of $x$ is $$ \begin{aligned} \overline{x}&=\frac{1}{n}\sum_{i=1}^n x_i \end{aligned} $$ the sample mean of $y$ is $$ \begin{aligned} \overline{y}&=\frac{1}{n}\sum_{i=1}^n y_i \end{aligned} $$ Thus, $$ \begin{aligned} r_{xy}&=\dfrac{s_{xy}}{s_x\cdot s_y} \end{aligned} $$ The correlation coefficient $r$ can not exceed unity numerically. i.e. $|r|\leq 1 \implies -1 \leq r \leq +1$. Two independent variables are uncorrelated. But the converse is not necessarily true. If $r = 0$, then there is no correlation between the ranks. If $r > 0$, then there is a positive correlation between the ranks. If $r = 1$, then there is a perfect positive correlation between the ranks. If $0 < r < 1$, then there is a partially positive correlation between the ranks. If $r < 0$, then there is a negative correlation between the ranks. If $r = -1$, then there is a perfect negative correlation between the ranks. If $-1 < r < 0$, then there is a partially negative correlation between the ranks. Karl Pearson's Correlation Coefficient Calculator Use this calculator to calculate the Karl Pearson's correlation coefficient. Pearson's Correlation Coefficient Calculator Data 1 : X Data 2 : Y Enter Data (Separated by comma ,) Number of Observations (n): Variance of X: Variance of Y: Covariance between X and Y: Pearson's Coefficient of Correlation: $r$ Coefficient of Determination: $r^2$ How to calculate Pearson's Correlation Coefficient? Step 1 – Enter the $X$ values separated by commas Step 2 – Enter the $Y$ values separated by commas Step 3 – Click calculate button to calculate correlation coefficient Step 4 – Gives the number of pairs of observations Step 5 – Gives the sample variance of $X$ Step 6 – Gives the sample variance of $Y$ Step 7 – Gives the sample covariance between $X$ and $Y$ Step 8 – Gives the sample Pearson's correlation coefficient and coefficient of determination. Pearson's Correlation Coefficient Example 1 A study was conducted to analyze the relationship between advertising expenditure and sales. 
The following data were recorded: X Advertising (in \$) Y Sales (in \$) 310 340 400 420 490 Compute the correlation coefficient between advertising expenditure and sales. Let $x$ denote the advertising expenditure and $y$ denote the sales. $x$ $y$ $x^2$ $y^2$ $xy$ 1 20 310 400 96100 6200 2 24 340 576 115600 8160 3 30 400 900 160000 12000 4 32 420 1024 176400 13440 Total 141 1960 4125 788200 56950 $$ \begin{aligned} s_{x}^2 & = \frac{1}{n-1}\bigg(\sum x^2 - \frac{(\sum x)^2}{n}\bigg)\\ & = \frac{1}{5-1}\bigg(4125-\frac{(141)^2}{5}\bigg)\\ &= \frac{1}{4}\bigg(4125-\frac{19881}{5}\bigg)\\ &= \frac{1}{4}\bigg(4125-3976.2\bigg)\\ &= \frac{148.8}{4}\\ &= 37.2. \end{aligned} $$ $$ \begin{aligned} s_{y}^2 & = \frac{1}{n-1}\bigg(\sum y^2 - \frac{(\sum y)^2}{n}\bigg)\\ & = \frac{1}{5-1}\bigg(788200-\frac{(1960)^2}{5}\bigg)\\ &= \frac{1}{4}\bigg(788200-\frac{3841600}{5}\bigg)\\ &= \frac{1}{4}\bigg(788200-768320\bigg)\\ &= \frac{19880}{4}\\ &= 4970. \end{aligned} $$ $$ \begin{aligned} s_{xy} & = \frac{1}{n-1}\bigg(\sum xy - \frac{(\sum x)(\sum y)}{n}\bigg)\\ & = \frac{1}{5-1}\bigg(56950-\frac{(141)(1960)}{5}\bigg)\\ &= \frac{1}{4}\bigg(56950-\frac{276360}{5}\bigg)\\ &= \frac{1}{4}\bigg(56950-55272\bigg)\\ &= \frac{1678}{4}\\ &= 419.5. \end{aligned} $$ The Karl Pearson's sample correlation coefficient between advertising expenditure and sales is $$ \begin{aligned} r_{xy} & = \frac{Cov(x,y)}{\sqrt{V(x) V(y)}}\\ &= \frac{s_{xy}}{\sqrt{s_x^2s_y^2}}\\ &=\frac{419.5}{\sqrt{37.2\times 4970}}\\ &=\frac{419.5}{\sqrt{184884}}\\ &=0.9756. \end{aligned} $$ The correlation coefficient between advertising expenditure and sales is $0.9756$. Since the value of correlation coefficient is positive, there is a strong positive relationship between advertising expenditure and sales. A study of the amount of rainfall and the quantity of air pollution removed produced the following data: Daily Rainfall (0.01cm) Particulate Removed ($\mu g/m^3$) 126 121 116 118 114 118 132 141 108 Calculate correlation coefficient between daily rainfall and particulate removed, Let $x$ denote the daily rainfall (0.01 cm) and $y$ denote the particulate removed ($\mu g/m^3$). Let $x$ denote the daily rainfall and $y$ denote the particulate removed. 1 4.3 126 18.49 15876 541.8 8 2.1 141 4.41 19881 296.1 Total 45.0 1094 244.26 133786 5348.2 $$ \begin{aligned} s_{x}^2 & = \frac{1}{n-1}\bigg(\sum x^2 - \frac{(\sum x)^2}{n}\bigg)\\ & = \frac{1}{9-1}\bigg(244.26-\frac{(45)^2}{9}\bigg)\\ &= \frac{1}{8}\bigg(244.26-\frac{2025}{9}\bigg)\\ &= \frac{1}{8}\bigg(244.26-225\bigg)\\ &= \frac{19.26}{8}\\ &= 2.4075. \end{aligned} $$ $$ \begin{aligned} s_{y}^2 & = \frac{1}{n-1}\bigg(\sum y^2 - \frac{(\sum y)^2}{n}\bigg)\\ & = \frac{1}{9-1}\bigg(133786-\frac{(1094)^2}{9}\bigg)\\ &= \frac{1}{8}\bigg(133786-\frac{1196836}{9}\bigg)\\ &= \frac{1}{8}\bigg(133786-132981.7778\bigg)\\ &= \frac{804.2222}{8}\\ &= 100.5278. \end{aligned} $$ $$ \begin{aligned} s_{xy} & = \frac{1}{n-1}\bigg(\sum xy - \frac{(\sum x)(\sum y)}{n}\bigg)\\ & = \frac{1}{9-1}\bigg(5348.2-\frac{(45)(1094)}{9}\bigg)\\ &= \frac{1}{8}\bigg(5348.2-\frac{49230}{9}\bigg)\\ &= \frac{1}{8}\bigg(5348.2-5470\bigg)\\ &= \frac{-121.8}{8}\\ &= -15.225. \end{aligned} $$ The Karl Pearson's sample correlation coefficient between daily rainfall and particulate removed is $$ \begin{aligned} r_{xy} & = \frac{Cov(x,y)}{\sqrt{V(x) V(y)}}\\ &= \frac{s_{xy}}{\sqrt{s_x^2s_y^2}}\\ &=\frac{-15.225}{\sqrt{2.4075\times 100.5278}}\\ &=\frac{-15.225}{\sqrt{242.0207}}\\ &=-0.9787. 
The correlation coefficient between daily rainfall and particulate removed is $-0.9787$. Since the value of the correlation coefficient is negative, there is a strong negative relationship between daily rainfall and particulate removed.

Pearson's Correlation Coefficient Example 3

The number of hours each of 14 students spent studying for a test and their scores on that test were recorded. Calculate the correlation coefficient between hours spent and test scores.

Let $x$ denote the number of hours spent studying for the test and $y$ denote the test scores.

        $x$     $y$     $x^2$    $y^2$     $xy$
1         1      41         1     1681       41
2         0      40         0     1600        0
⋮
5         2      52         4     2704      104
⋮
8         5      53        25     2809      265
⋮
10        6      70        36     4900      420
⋮
Total    56     828       312    53128     3879

$$ \begin{aligned} s_{x}^2 & = \frac{1}{n-1}\bigg(\sum x^2 - \frac{(\sum x)^2}{n}\bigg)\\ & = \frac{1}{14-1}\bigg(312-\frac{(56)^2}{14}\bigg)\\ &= \frac{1}{13}\bigg(312-\frac{3136}{14}\bigg)\\ &= \frac{1}{13}\bigg(312-224\bigg)\\ &= \frac{88}{13}\\ &= 6.7692. \end{aligned} $$
$$ \begin{aligned} s_{y}^2 & = \frac{1}{n-1}\bigg(\sum y^2 - \frac{(\sum y)^2}{n}\bigg)\\ & = \frac{1}{14-1}\bigg(53128-\frac{(828)^2}{14}\bigg)\\ &= \frac{1}{13}\bigg(53128-\frac{685584}{14}\bigg)\\ &= \frac{1}{13}\bigg(53128-48970.2857\bigg)\\ &= \frac{4157.7143}{13}\\ &= 319.8242. \end{aligned} $$
$$ \begin{aligned} s_{xy} & = \frac{1}{n-1}\bigg(\sum xy - \frac{(\sum x)(\sum y)}{n}\bigg)\\ & = \frac{1}{14-1}\bigg(3879-\frac{(56)(828)}{14}\bigg)\\ &= \frac{1}{13}\bigg(3879-\frac{46368}{14}\bigg)\\ &= \frac{1}{13}\bigg(3879-3312\bigg)\\ &= \frac{567}{13}\\ &= 43.6154. \end{aligned} $$

The Karl Pearson's sample correlation coefficient between the number of hours spent studying for the test and the test scores is
$$ \begin{aligned} r_{xy} & = \frac{Cov(x,y)}{\sqrt{V(x) V(y)}}\\ &= \frac{s_{xy}}{\sqrt{s_x^2s_y^2}}\\ &=\frac{43.6154}{\sqrt{6.7692\times 319.8242}}\\ &=\frac{43.6154}{\sqrt{2164.954}}\\ &=0.9374. \end{aligned} $$

The correlation coefficient between the number of hours spent studying for the test and the test scores is $0.9374$. Since the value of the correlation coefficient is positive, there is a strong positive relationship between hours spent studying and test scores.

In this tutorial, you learned the step-by-step procedure for calculating Pearson's correlation coefficient, and how to interpret the correlation coefficient. To learn more about correlation and regression, please refer to the Correlation and Regression tutorials.

Let me know in the comments if you have any questions on Pearson's correlation coefficient calculator with examples and your thoughts on this article.
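The same computation is easy to script. The following short Python sketch (my own illustration, not part of the site's calculator) implements the $n-1$ sample formulas used above and reproduces Example 1; the variable names and the rounding are illustrative only.

```python
# Minimal sketch: Pearson's r from the n-1 sample formulas used in this tutorial.
import math

def pearson_r(x, y):
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    s_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y)) / (n - 1)
    s_xx = sum((xi - mean_x) ** 2 for xi in x) / (n - 1)
    s_yy = sum((yi - mean_y) ** 2 for yi in y) / (n - 1)
    return s_xy / math.sqrt(s_xx * s_yy)

x = [20, 24, 30, 32, 35]        # advertising expenditure (Example 1)
y = [310, 340, 400, 420, 490]   # sales (Example 1)
print(round(pearson_r(x, y), 4))  # 0.9756, matching the hand computation above
```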
Acta Mathematica, Volume 200, Number 1 (2008), 85–153.

Amalgamated free products of weakly rigid factors and calculation of their symmetry groups
Adrian Ioana, Jesse Peterson, and Sorin Popa

Full-text: Open access

Abstract. We consider amalgamated free product II$_1$ factors $M = M_1 *_B M_2 *_B \cdots$ and use "deformation/rigidity" and "intertwining" techniques to prove that any relatively rigid von Neumann subalgebra $Q \subset M$ can be unitarily conjugated into one of the $M_i$'s. We apply this to the case where the $M_i$'s are w-rigid II$_1$ factors, with $B$ equal to either $\mathbb{C}$, to a Cartan subalgebra $A$ in $M_i$, or to a regular hyperfinite II$_1$ subfactor $R$ in $M_i$, to obtain the following type of unique decomposition results, à la Bass–Serre: If $M = (N_1 *_C N_2 *_C \cdots)^t$, for some $t > 0$ and some other similar inclusions of algebras $C \subset N_i$, then, after a permutation of indices, $(B \subset M_i)$ is inner conjugate to $(C \subset N_i)^t$, for all $i$. Taking $B = \mathbb{C}$ and $M_i = \bigl(L(\mathbb{Z}^2 \rtimes F_2)\bigr)^{t_i}$, with $\{t_i\}_{i \geq 1} = S$ a given countable subgroup of $\mathbb{R}_+^*$, we obtain continuously many non-stably isomorphic factors $M$ with fundamental group $\mathcal{F}(M)$ equal to $S$. For $B = A$, we obtain a new class of factors $M$ with unique Cartan subalgebra decomposition, with a large subclass satisfying $\mathcal{F}(M) = \{1\}$ and $\operatorname{Out}(M)$ abelian and calculable. Taking $B = R$, we get examples of factors with $\mathcal{F}(M) = \{1\}$, $\operatorname{Out}(M) = K$, for any given separable compact abelian group $K$.

Article info and citation
Acta Math., Volume 200, Number 1 (2008), 85–153.
Received: 28 February 2006. Revised: 20 November 2007. First available in Project Euclid: 31 January 2017.
Permanent link to this document: https://projecteuclid.org/euclid.acta/1485891958
Mathematical Reviews number (MathSciNet): MR2386109
2008 © Institut Mittag-Leffler

Citation: Ioana, Adrian; Peterson, Jesse; Popa, Sorin. Amalgamated free products of weakly rigid factors and calculation of their symmetry groups. Acta Math. 200 (2008), no. 1, 85–153. doi:10.1007/s11511-008-0024-5. https://projecteuclid.org/euclid.acta/1485891958
Discrete & Continuous Dynamical Systems - B
May 2013, Volume 18, Issue 3

Mathematics of traveling waves in chemotaxis --Review paper--
Zhi-An Wang
2013, 18(3): 601-641. doi: 10.3934/dcdsb.2013.18.601
This article surveys the mathematical aspects of traveling waves of a class of chemotaxis models with logarithmic sensitivity, which describe a variety of biological or medical phenomena including bacterial chemotactic motion, initiation of angiogenesis and reinforced random walks. The survey is focused on the existence, wave speed, asymptotic decay rates, stability and chemical diffusion limits of traveling wave solutions. The main approaches are reviewed and related analytical results are given with sketchy proofs. We also develop some new results with detailed proofs to fill the gap existing in the literature. The numerical simulations of steadily propagating waves will be presented along the study. Open problems are proposed for interested readers to pursue.
Zhi-An Wang. Mathematics of traveling waves in chemotaxis --Review paper--. Discrete & Continuous Dynamical Systems - B, 2013, 18(3): 601-641. doi: 10.3934/dcdsb.2013.18.601.

Random attractors for stochastic FitzHugh-Nagumo systems driven by deterministic non-autonomous forcing
Abiti Adili and Bixiang Wang
This paper is concerned with the asymptotic behavior of solutions of the FitzHugh-Nagumo system on $\mathbb{R}^n$ driven by additive noise and deterministic non-autonomous forcing. We prove the system has a random attractor which pullback attracts all tempered random sets. We also prove the periodicity of the random attractor when the system is perturbed by time periodic forcing. The pullback asymptotic compactness of solutions is established by uniform estimates on the tails of solutions outside a large ball in $\mathbb{R}^n$.
Abiti Adili, Bixiang Wang. Random attractors for stochastic FitzHugh-Nagumo systems driven by deterministic non-autonomous forcing. Discrete & Continuous Dynamical Systems - B, 2013, 18(3): 643-666. doi: 10.3934/dcdsb.2013.18.643.

The spectral collocation method for stochastic differential equations
Can Huang and Zhimin Zhang
In this paper, we use the Chebyshev spectral collocation method to solve a certain type of stochastic differential equations (SDEs). We also use this method to estimate parameters of stochastic differential equations from discrete observations by the maximum likelihood technique and the Kessler technique. Our numerical tests show that the spectral method gives better results than Euler's method and the Shoji-Ozaki method.
Can Huang, Zhimin Zhang. The spectral collocation method for stochastic differential equations. Discrete & Continuous Dynamical Systems - B, 2013, 18(3): 667-679. doi: 10.3934/dcdsb.2013.18.667.

On the Vlasov-Poisson-Fokker-Planck equation near Maxwellian
Hyung Ju Hwang and Juhi Jang
We establish the exponential time decay rate of smooth solutions of small amplitude to the Vlasov-Poisson-Fokker-Planck equations to the Maxwellian both in the whole space and in the periodic box via the uniform-in-time energy estimates and also the macroscopic equations.
Hyung Ju Hwang, Juhi Jang. On the Vlasov-Poisson-Fokker-Planck equation near Maxwellian. Discrete & Continuous Dynamical Systems - B, 2013, 18(3): 681-691. doi: 10.3934/dcdsb.2013.18.681.
Nonlocal generalized models of predator-prey systems
Christian Kuehn and Thilo Gross
The method of generalized modeling has been used to analyze differential equations arising in applications. It makes minimal assumptions about the precise functional form of the differential equation and the quantitative values of the steady-states which it aims to analyze from a dynamical systems perspective. The method has been applied successfully in many different contexts, particularly in ecology and systems biology, where the key advantage is that one does not have to select a particular model but is able to provide directly applicable conclusions for sets of models simultaneously. Although many dynamical systems in mathematical biology exhibit steady-state behaviour one also wants to understand nonlocal dynamics beyond equilibrium points. In this paper we analyze predator-prey dynamical systems and extend the method of generalized models to periodic solutions. First, we adapt the equilibrium generalized modeling approach and compute the unique Floquet multiplier of the periodic solution which depends upon so-called generalized elasticity and scale functions. We prove that these functions also have to satisfy a flow on parameter (or moduli) space. Then we use Fourier analysis to provide computable conditions for stability and the moduli space flow. The final stability analysis reduces to two discrete convolutions which can be interpreted to understand when the predator-prey system is stable and what factors enhance or prohibit stable oscillatory behaviour. Finally, we provide a sampling algorithm for parameter space based on nonlinear optimization and the Fast Fourier Transform which enables us to gain a statistical understanding of the stability properties of periodic predator-prey dynamics.
Christian Kuehn, Thilo Gross. Nonlocal generalized models of predator-prey systems. Discrete & Continuous Dynamical Systems - B, 2013, 18(3): 693-720. doi: 10.3934/dcdsb.2013.18.693.

Robustness of Morphogen gradients with "bucket brigade" transport through membrane-associated non-receptors
Jinzhi Lei, Dongyong Wang, You Song, Qing Nie and Frederic Y. M. Wan
Robust multiple-fate morphogen gradients are essential for embryo development. Here, we analyze mathematically a model of morphogen gradient (such as Dpp in Drosophila wing imaginal disc) formation in the presence of non-receptors with both diffusion of free morphogens and the movement of morphogens bound to non-receptors. Under the assumption of rapid degradation of unbound morphogen, we introduce a method of functional boundary value problem and prove the existence, uniqueness and linear stability of a biologically acceptable steady-state solution. Next, we investigate the robustness of this steady-state solution with respect to significant changes in the morphogen synthesis rate. We prove that the model is able to produce robust biological morphogen gradients when production and degradation rates of morphogens are large enough and non-receptors are abundant. Our results provide mathematical and biological insight to a mechanism of achieving stable robust long distance morphogen gradients. Key elements of this mechanism are rapid turnover of morphogen to non-receptors of neighboring cells resulting in significant degradation and transport of non-receptor-morphogen complexes, the latter moving downstream through a "bucket brigade" process.
Jinzhi Lei, Dongyong Wang, You Song, Qing Nie, Frederic Y. M. Wan.
Robustness of Morphogen gradients with "bucket brigade" transport through membrane-associated non-receptors. Discrete & Continuous Dynamical Systems - B, 2013, 18(3): 721-739. doi: 10.3934/dcdsb.2013.18.721.

Multidimensional stability of planar traveling waves for an integrodifference model
Judith R. Miller and Huihui Zeng
This paper studies the multidimensional stability of planar traveling waves for integrodifference equations. It is proved that for a Gaussian dispersal kernel, if the traveling wave is exponentially orbitally stable in one space dimension, then the corresponding planar wave is stable in $H^m(\mathbb{R}^N)$, $N\ge 4$, $m\ge [N/2]+1$, with the perturbation decaying at algebraic rate.
Judith R. Miller, Huihui Zeng. Multidimensional stability of planar traveling waves for an integrodifference model. Discrete & Continuous Dynamical Systems - B, 2013, 18(3): 741-751. doi: 10.3934/dcdsb.2013.18.741.

Adaptation and migration of a population between patches
Sepideh Mirrahimi
A Hamilton-Jacobi formulation has been established previously for phenotypically structured population models where the solution concentrates as Dirac masses in the limit of small diffusion. Is it possible to extend this approach to spatial models? Are the limiting solutions still in the form of sums of Dirac masses? Does the presence of several habitats lead to polymorphic situations? We study the stationary solutions of a structured population model, while the population is structured by continuous phenotypical traits and discrete positions in space. The growth term varies from one habitable zone to another, for instance because of a change in the temperature. The individuals can migrate from one zone to another with a constant rate. The mathematical modeling of this problem, considering mutations between phenotypical traits and competitive interaction of individuals within each zone via a single resource, leads to a system of coupled parabolic integro-differential equations. We study the asymptotic behavior of the stationary solutions to this model in the limit of small mutations. The limit, which is a sum of Dirac masses, can be described with the help of an effective Hamiltonian. The presence of migration can modify the dominant traits and lead to polymorphic situations.
Sepideh Mirrahimi. Adaptation and migration of a population between patches. Discrete & Continuous Dynamical Systems - B, 2013, 18(3): 753-768. doi: 10.3934/dcdsb.2013.18.753.

Multiple steady states in a mathematical model for interactions between T cells and macrophages
Alan D. Rendall
The aim of this paper is to prove results about the existence and stability of multiple steady states in a system of ordinary differential equations introduced by R. Lev Bar-Or [5] to model the interactions between T cells and macrophages. Previous results showed that for certain values of the parameters these equations have three stationary solutions, two of which are stable. Here it is shown that there are values of the parameters for which the number of stationary solutions is at least seven and the number of stable stationary solutions at least four. This requires approaches different to those used in existing work on this subject. In addition, a rather explicit characterization is obtained of regions of parameter space for which the system has a given number of stationary solutions.
Alan D. Rendall. Multiple steady states in a mathematical model for interactions between T cells and macrophages.
Discrete & Continuous Dynamical Systems - B, 2013, 18(3): 769-782. doi: 10.3934/dcdsb.2013.18.769.

Dynamical analysis in growth models: Blumberg's equation
J. Leonel Rocha and Sandra M. Aleixo
We present a new dynamical approach to the Blumberg's equation, a family of unimodal maps. These maps are proportional to $Beta(p,q)$ probability densities functions. Using the symmetry of the $Beta(p,q)$ distribution and symbolic dynamics techniques, a new concept of mirror symmetry is defined for this family of maps. The kneading theory is used to analyze the effect of such symmetry in the presented models. The main result proves that two mirror symmetric unimodal maps have the same topological entropy. Different population dynamics regimes are identified, when the intrinsic growth rate is modified: extinctions, stabilities, bifurcations, chaos and Allee effect. To illustrate our results, we present a numerical analysis, where are demonstrated: monotonicity of the topological entropy with the variation of the intrinsic growth rate, existence of isentropic sets in the parameters space and mirror symmetry.
J. Leonel Rocha, Sandra M. Aleixo. Dynamical analysis in growth models: Blumberg's equation. Discrete & Continuous Dynamical Systems - B, 2013, 18(3): 783-795. doi: 10.3934/dcdsb.2013.18.783.

The long time behavior of a spectral collocation method for delay differential equations of pantograph type
Jie Tang, Ziqing Xie and Zhimin Zhang
In this paper, we propose an efficient numerical method for delay differential equations with vanishing proportional delay qt (0 < q < 1). The algorithm is a mixture of the Legendre-Gauss collocation method and domain decomposition. It has global convergence and spectral accuracy provided that the data in the given pantograph delay differential equation are sufficiently smooth. Numerical results demonstrate the spectral accuracy of this approach and coincide well with theoretical analysis.
Jie Tang, Ziqing Xie, Zhimin Zhang. The long time behavior of a spectral collocation method for delay differential equations of pantograph type. Discrete & Continuous Dynamical Systems - B, 2013, 18(3): 797-819. doi: 10.3934/dcdsb.2013.18.797.

Large-time behavior of a parabolic-parabolic chemotaxis model with logarithmic sensitivity in one dimension
Youshan Tao, Lihe Wang and Zhi-An Wang
This paper deals with the chemotaxis system
$$ \left\{ \begin{array}{ll} u_t ={D} u_{xx}-\chi [u(\ln v)_x]_x, & x\in (0, 1), \ t>0,\\ v_t =\varepsilon v_{xx} +uv-\mu v, & x\in (0, 1), \ t>0, \end{array} \right. $$
under Neumann boundary condition, where $\chi<0$, $D>0$, $\varepsilon>0$ and $\mu>0$ are constants. It is shown that for any sufficiently smooth initial data $(u_0, v_0)$ fulfilling $u_0\ge 0$, $u_0 \not\equiv 0$ and $v_0>0$, the system possesses a unique global smooth solution that enjoys exponential convergence properties in $L^\infty(\Omega)$ as time goes to infinity, which depend on the sign of $\mu-\bar{u}_0$, where $\bar{u}_0 :=\int_0^1 u_0 dx$. Moreover, we prove that the constant pair $(\mu, (\frac{\mu}{\lambda})^{\frac{D}{\chi}})$ (where $\lambda>0$ is an arbitrary constant) is the only positive stationary solution. The biological implications of our results will be given in the paper.
Youshan Tao, Lihe Wang, Zhi-An Wang. Large-time behavior of a parabolic-parabolic chemotaxis model with logarithmic sensitivity in one dimension. Discrete & Continuous Dynamical Systems - B, 2013, 18(3): 821-845. doi: 10.3934/dcdsb.2013.18.821.
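As a quick sanity check on the last abstract (my own verification, not text from the paper): any constant pair $u \equiv \mu$, $v \equiv c > 0$ makes both time derivatives in the displayed system vanish, since
$$ u_t = D\,u_{xx} - \chi\,[u(\ln v)_x]_x = 0, \qquad v_t = \varepsilon\,v_{xx} + uv - \mu v = \mu c - \mu c = 0, $$
which is consistent with the abstract's statement that the only positive stationary solutions are constant pairs of the stated form $(\mu, (\mu/\lambda)^{D/\chi})$.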
Foundations of Data Science
June 2020, 2(2): 101-121. doi: 10.3934/fods.2020007

Topological reconstruction of sub-cellular motion with Ensemble Kalman velocimetry
Le Yin (1), Ioannis Sgouralis (2), and Vasileios Maroulas (1)
(1) Mathematics Dept., University of Tennessee, Knoxville, TN 37996, USA
(2) Center for Biological Physics, Arizona State University, Tempe, AZ 85281, USA
* Corresponding author: [email protected]

Abstract. Microscopy imaging of plant cells allows the elaborate analysis of sub-cellular motions of organelles. The large video data set can be efficiently analyzed by automated algorithms. We develop a novel, data-oriented algorithm, which can track organelle movements and reconstruct their trajectories on stacks of image data. Our method proceeds with three steps: (i) identification, (ii) localization, and (iii) linking. This method combines topological data analysis and Ensemble Kalman Filtering, and does not assume a specific motion model. Application of this method on simulated data sets shows an agreement with ground truth. We also successfully test our method on real microscopy data.

Keywords: Multi-target tracking, velocimetry, topological data analysis, ensemble Kalman filter, fluorescence microscopy.
Mathematics Subject Classification: Primary: 62M20, 62R40; Secondary: 62H35.

Citation: Le Yin, Ioannis Sgouralis, Vasileios Maroulas. Topological reconstruction of sub-cellular motion with Ensemble Kalman velocimetry. Foundations of Data Science, 2020, 2 (2): 101-121. doi: 10.3934/fods.2020007
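For readers unfamiliar with the Ensemble Kalman Filter mentioned in the abstract, the following is a minimal, self-contained sketch of a generic stochastic EnKF analysis step in Python. It is my own illustration of the standard algorithm, not the authors' implementation; the state dimension, observation operator, and noise levels are made-up toy values.

```python
# Generic stochastic EnKF analysis step (illustrative sketch, toy values).
import numpy as np

def enkf_update(ensemble, H, y, R, rng):
    """ensemble: (n_members, n_state); H: (n_obs, n_state); y: (n_obs,); R: (n_obs, n_obs)."""
    n = ensemble.shape[0]
    anomalies = ensemble - ensemble.mean(axis=0)
    P = anomalies.T @ anomalies / (n - 1)          # sample covariance of the ensemble
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    Y = y + rng.multivariate_normal(np.zeros(len(y)), R, size=n)  # perturbed observations
    return ensemble + (Y - ensemble @ H.T) @ K.T   # updated (analysis) ensemble

rng = np.random.default_rng(0)
ensemble = rng.normal(size=(50, 2))                # toy 2-D state, e.g. a local displacement vector
H, R = np.eye(2), 0.1 * np.eye(2)
posterior = enkf_update(ensemble, H, np.array([1.0, -0.5]), R, rng)
print(posterior.mean(axis=0))                      # ensemble mean pulled toward the observation
```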
Figure 1. The motion of organelles, during an experiment starting at $ t_1 = 0 $ and ending at $ t_N = T $, is identified at discrete times $ t_n $ (dots). For simplicity, space is represented with one dimension, although real datasets are two dimensional. The black dots represent the locations of organelles at different time levels. $ \tilde {\mathcal{R}} $ is the set containing the locations of all black dots.

Figure 2. Here, $ \bar x^j $ is the position of an organelle producing the images shown (gray), and $ \bar f_{n, +} $ and $ \bar f_{n, -} $ illustrate the 1-level forward displacement and backward displacement of $ \bar x^j $, respectively. For clarity, the images produced by the organelle are shown as multi-peaked and space as 1D.

Figure 3. The relations of forward fields and backward fields are indicated here. (a) shows the depiction of the forward displacement fields, and (b) shows the depiction of the backward displacement fields. For clarity, time marches forward in (a) and backward in (b).

Figure 4. (a) shows $ \tilde {\mathcal{R}} $ as black dots and $ {\mathcal{R}} $ as gray lines; (b) shows $ {\mathcal{R}} $, $ {\mathcal{T}}_n $ in Eq. (12) and $ P_ {\mathcal{R}}^{-1}( {\mathcal{T}}_n) $ as blue segments; (c) shows $ {\mathcal{R}} $, $ {\mathcal{T}}_n $, $ P_ {\mathcal{R}}^{-1}( {\mathcal{T}}_n) $, $ \tilde R $ and reconstructed discrete trajectories. For visualization purposes, space is shown in 1D.

Figure 5. Case I: The frame size is 320 by 320 pixels. Trajectories of 20 organelles are in red, spanning from time $ t = 0\; $s to $ t = 99\; $s. Their motion is described by a diffusion process containing both a diffusion and a drift term.
The starting distance of any two adjacent organelles at $ t = 0\; $s is 10 pixels.

Figure 6. Case I: Four histograms of the mean error of each frame. Each one compares the estimated forward and backward displacement with ground truth along the $x$-axis and $y$-axis, respectively.

Figure 7. Case I: Linking result of all trajectories in red. The accuracy rate is 100%.

Figure 8. Case I: Positions of organelles over time after adding perturbation $ U(-\epsilon, \epsilon) $ when $ \epsilon = 0, \ 1, \ 1.5, \ 2, \ 2.5, \ 3, \ 3.5, \ 4\; $pixels, respectively. As $ \epsilon $ increases, it becomes more difficult to detect trajectories; in particular, when $ \epsilon = 4\; $pixels, there are no clear patterns for any trajectory to be reconstructed.

Figure 9. Case II: The left shows the rough detection result, and the right shows the locations after correction. The red dots in the left panel and the blue pentagons in the right panel are the original locations before Bayesian identification. The red pentagons in the right panel are the fitted locations after Bayesian identification.

Figure 10. Case II: Estimated displacement fields of the 17th frame using EnKF. Panel (a) shows the estimated displacement fields for the entire focal plane. Panel (b) shows the enlarged area of $ [140,230]\times[260,350] $ in Panel (a).

Figure 11. Case II: Trajectory reconstruction result.

Figure 12. Case II: Four specific sets of trajectory reconstructions vs ground truth. Each panel shows reconstructions versus one true trajectory. The upper left is amplified from the area $ [290,380]\times[40,130] $ in Fig. 11; the upper right is amplified from the area $ [40,190]\times[190,340] $ in Fig. 11; the bottom left is amplified from the area $ [80,210]\times[200,330] $ in Fig. 11; the bottom right is amplified from the area $ [230,380]\times[200,350] $ in Fig. 11.

Figure 13. Case III: Panel (a) is the first frame of the video. Panel (b) exhibits all estimated trajectories in red. Panel (c) further shows each estimated trajectory in different colors.

Table 1. Case I: Detection results

$\epsilon$ (in pixels)   total   $>10\;$s   $=100\%$   $\geq 90\%$   $\geq 50\%$
$\epsilon=1$               20       20         20          20            20
$\epsilon=1.5$             20       20         20          20            20
$\epsilon=3$               32       27          9          14            17
$\epsilon=3.5$             33       30          5          10            15
$\epsilon=4$               53       40          1           4            14
Topological Equivalence of nonautonomous difference equations with a family of dichotomies on the half line. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020278 Tian Ma, Shouhong Wang. Topological phase transition III: Solar surface eruptions and sunspots. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1) : 501-514. doi: 10.3934/dcdsb.2020350 Jann-Long Chern, Sze-Guang Yang, Zhi-You Chen, Chih-Her Chen. On the family of non-topological solutions for the elliptic system arising from a product Abelian gauge field theory. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3291-3304. doi: 10.3934/dcds.2020127 Lin Jiang, Song Wang. Robust multi-period and multi-objective portfolio selection. Journal of Industrial & Management Optimization, 2021, 17 (2) : 695-709. doi: 10.3934/jimo.2019130 Bilel Elbetch, Tounsia Benzekri, Daniel Massart, Tewfik Sari. The multi-patch logistic equation. Discrete & Continuous Dynamical Systems - B, 2021 doi: 10.3934/dcdsb.2021025 Hyung-Chun Lee. Efficient computations for linear feedback control problems for target velocity matching of Navier-Stokes flows via POD and LSTM-ROM. Electronic Research Archive, , () : -. doi: 10.3934/era.2020128 Paul E. Anderson, Timothy P. Chartier, Amy N. Langville, Kathryn E. Pedings-Behling. The rankability of weighted data from pairwise comparisons. Foundations of Data Science, 2021 doi: 10.3934/fods.2021002 George W. Patrick. The geometry of convergence in numerical analysis. Journal of Computational Dynamics, 2021, 8 (1) : 33-58. doi: 10.3934/jcd.2021003 Liang Huang, Jiao Chen. The boundedness of multi-linear and multi-parameter pseudo-differential operators. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020291 Anna Anop, Robert Denk, Aleksandr Murach. Elliptic problems with rough boundary data in generalized Sobolev spaces. Communications on Pure & Applied Analysis, , () : -. doi: 10.3934/cpaa.2020286 Haruki Umakoshi. A semilinear heat equation with initial data in negative Sobolev spaces. Discrete & Continuous Dynamical Systems - S, 2021, 14 (2) : 745-767. doi: 10.3934/dcdss.2020365 Min Chen, Olivier Goubet, Shenghao Li. Mathematical analysis of bump to bucket problem. Communications on Pure & Applied Analysis, 2020, 19 (12) : 5567-5580. doi: 10.3934/cpaa.2020251 Hirokazu Ninomiya. Entire solutions of the Allen–Cahn–Nagumo equation in a multi-dimensional space. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 395-412. doi: 10.3934/dcds.2020364 Shun Zhang, Jianlin Jiang, Su Zhang, Yibing Lv, Yuzhen Guo. ADMM-type methods for generalized multi-facility Weber problem. Journal of Industrial & Management Optimization, 2020 doi: 10.3934/jimo.2020171 HTML views (339) Le Yin Ioannis Sgouralis Vasileios Maroulas Article outline
Electronic Research Archive
doi: 10.3934/era.2021006

$ C^* $-algebras associated with asymptotic equivalence relations defined by hyperbolic toral automorphisms
Kengo Matsumoto
Department of Mathematics, Joetsu University of Education, Joetsu, 943-8512, Japan

Received December 2020. Published January 2021.
Fund Project: This work was supported by JSPS KAKENHI Grant Numbers 15K04896, 19K03537.

Abstract. We study the $ C^* $-algebras of the étale groupoids defined by the asymptotic equivalence relations for hyperbolic automorphisms on the two-dimensional torus. The algebras are proved to be isomorphic to four-dimensional non-commutative tori by an explicit numerical computation. The ranges of the unique tracial states on the $ K_0 $-groups of the $ C^* $-algebras are described in terms of the hyperbolic matrices of the automorphisms on the torus.

Keywords: Hyperbolic toral automorphisms, Smale space, asymptotic equivalence relation, étale groupoid, non-commutative tori.
Mathematics Subject Classification: Primary: 37D20, 37A55; Secondary: 46L35.

Citation: Kengo Matsumoto. $ C^* $-algebras associated with asymptotic equivalence relations defined by hyperbolic toral automorphisms. Electronic Research Archive, doi: 10.3934/era.2021006
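To make the phrase "hyperbolic toral automorphism" concrete, here is a small numerical illustration (my own, not from the paper): an automorphism of the two-dimensional torus is induced by an integer matrix with determinant $\pm 1$, and it is hyperbolic when no eigenvalue has modulus 1. Arnold's cat map is the standard example.

```python
# Check hyperbolicity of a toral automorphism matrix (illustrative example only).
import numpy as np

A = np.array([[2, 1],
              [1, 1]])              # Arnold's cat map, an element of SL(2, Z)

eigvals = np.linalg.eigvals(A)
print("det =", round(np.linalg.det(A)))        # 1, so A induces an automorphism of the torus
print("eigenvalues:", eigvals)                 # (3 ± sqrt(5))/2 ≈ 2.618 and 0.382
print("hyperbolic:", all(abs(abs(l) - 1) > 1e-12 for l in eigvals))  # no eigenvalue on the unit circle
```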
Feedback stabilization of bilinear coupled hyperbolic systems. Discrete & Continuous Dynamical Systems - S, 2020 doi: 10.3934/dcdss.2020434 Constantine M. Dafermos. A variational approach to the Riemann problem for hyperbolic conservation laws. Discrete & Continuous Dynamical Systems - A, 2009, 23 (1&2) : 185-195. doi: 10.3934/dcds.2009.23.185 Hirokazu Ninomiya. Entire solutions of the Allen–Cahn–Nagumo equation in a multi-dimensional space. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 395-412. doi: 10.3934/dcds.2020364 Dong-Ho Tsai, Chia-Hsing Nien. On space-time periodic solutions of the one-dimensional heat equation. Discrete & Continuous Dynamical Systems - A, 2020, 40 (6) : 3997-4017. doi: 10.3934/dcds.2020037 Petr Čoupek, María J. Garrido-Atienza. Bilinear equations in Hilbert space driven by paths of low regularity. Discrete & Continuous Dynamical Systems - B, 2021, 26 (1) : 121-154. doi: 10.3934/dcdsb.2020230 Liam Burrows, Weihong Guo, Ke Chen, Francesco Torella. Reproducible kernel Hilbert space based global and local image segmentation. Inverse Problems & Imaging, 2021, 15 (1) : 1-25. doi: 10.3934/ipi.2020048 Boris Andreianov, Mohamed Maliki. On classes of well-posedness for quasilinear diffusion equations in the whole space. Discrete & Continuous Dynamical Systems - S, 2021, 14 (2) : 505-531. doi: 10.3934/dcdss.2020361 Mark F. Demers. Uniqueness and exponential mixing for the measure of maximal entropy for piecewise hyperbolic maps. Discrete & Continuous Dynamical Systems - A, 2021, 41 (1) : 217-256. doi: 10.3934/dcds.2020217 Soonki Hong, Seonhee Lim. Martin boundary of brownian motion on gromov hyperbolic metric graphs. Discrete & Continuous Dynamical Systems - A, 2021 doi: 10.3934/dcds.2021014 Zhiting Ma. Navier-Stokes limit of globally hyperbolic moment equations. Kinetic & Related Models, 2021, 14 (1) : 175-197. doi: 10.3934/krm.2021001 \begin{document}$ C^* $\end{document}-algebras associated with asymptotic equivalence relations defined by hyperbolic toral automorphisms" readonly="readonly">
Normals and the Inverse Transpose, Part 3: Grassmann On Duals
July 22, 2018 · Graphics, Math

Welcome back! In the last couple of articles, we learned about different ways to understand normal vectors in 3D space—either as bivectors (part 1), or as dual vectors (part 2). Both can be valid interpretations, but they carry different units, and react differently to transformations. In this third and final installment, we're going to leave behind the focus on normal vectors, and explore a couple of other unitful vector quantities. We've seen how Grassmann bivectors and trivectors act as oriented areas and volumes, respectively; and we saw how dual vectors act as oriented line densities, with units of inverse length. Now, we're going to put these two geometric concepts together, and find out what they can accomplish with their combined powers. (Get it? Powers? Like powers of a scale factor? Uh, you know what, never mind.) I'm going to dive right in, so if you need a refresher on either Grassmann algebra or dual spaces, you may want to re-skim the previous articles.

Contents:
- Wedge Products of Dual Vectors
- Dual Bivectors
- Dual Trivectors
- A Few More Topics
- The Interior Product
- The Hodge Star
- The Inner Product, or Forgetting About Duals
- What's The Use of All This?
- Organizing the Zoo

Grassmann algebra allows us to take wedge products of vectors, producing higher-grade algebraic entities such as bivectors and trivectors. Just as we can do this with base vectors, we can do the same thing on dual vectors, producing dual bivectors and dual trivectors. A dual bivector is formed by wedging two dual vectors, like: $$ {\bf e_x^*} \wedge {\bf e_y^*} = {\bf e_{xy}^*} $$ and a dual trivector is the product of three: $$ {\bf e_x^*} \wedge {\bf e_y^*} \wedge {\bf e_z^*} = {\bf e_{xy}^*} \wedge {\bf e_z^*} = {\bf e_{xyz}^*} $$ This works exactly the same way that wedge products of ordinary vectors do; in particular, the same anticommutative law applies. So what's the geometric meaning of these dual $k$-vectors? Recall that a dual vector is defined as a linear form—a function from some vector space $V$ to scalars $\Bbb R$. Conveniently, the wedge products of dual vectors turn out to be isomorphic to the duals of wedge products of vectors. (Mathematically, we can say, for finite-dimensional $V$: $$ \textstyle \bigwedge^k \bigl( V^* \bigr) \cong \bigl(\bigwedge^k V \bigr)^* $$ where $\bigwedge^k$ is the operation to construct the set of $k$-vectors over a given base vector space.) The upshot is that dual $k$-vectors can be understood as linear forms on $k$-vectors: a dual bivector is a linear function from bivectors to scalars, and a dual trivector is a linear function from trivectors to scalars. Let's see how this works in more detail.

In the previous article, we saw how a dual vector can be visualized as a field of parallel, uniformly spaced planes, representing the level sets of a linear form (figure credit: Maschen, Wikipedia). You can think of the discrete planes in this picture as representing intervals of one unit in the output of the linear form. Keep in mind, though, that there are actually a continuous infinity of these planes, filling space—one for every possible output value of the linear form. When you evaluate the linear form—i.e. pair a dual vector with a vector—the result represents how many planes the vector crosses, from its tail to its tip (in a continuous-measure sense of "how many").
This will depend on both the length and orientation of the vector: for example, a vector parallel to the planes will return zero, no matter its length. A dual bivector can be thought of in a similar way—but instead of planes, we now picture a field of parallel lines, uniformly spaced over the plane perpendicular to them. As suggested by this diagram, when you wedge two dual vectors, the resulting dual bivector consists of all the lines of intersection of the two dual vectors' respective planes. What happens when we pair this dual bivector with a base bivector? As before, the result is a scalar—this time representing how many lines the bivector crosses! If you visualize the bivector as a parallelogram, or circle or any other shape, it will have a certain area. It will therefore intersect some quantity of the continuous mass of lines. This quantity won't depend on the shape of the bivector—remember, bivectors don't actually have any defined shape—only on its area (magnitude) and orientation. A bivector whose plane runs parallel to the lines will return zero, no matter its area. Because dual vectors have units of inverse length, and a dual bivector is a product of dual vectors, a dual bivector has units of inverse area. It represents an oriented areal density, such as a probability density over a surface! When you pair the dual bivector with a bivector, the result tells you how much probability (or whatever else) is covered by that bivector's area. And as implied by their units, dual bivectors scale as $1/a^2$. (If you scale an object up by a factor of $a$, the probability density on its surface goes down by a factor of $a^2$, because the same total probability is now spread over an $a^2$-larger area.) How about the transformation rule for dual bivectors? Well, we learned in part 1 that bivectors transform as $\text{cofactor}(M)$; and in part 2, we found that dual vectors transform as the inverse transpose, $M^{-T}$. It follows that dual bivectors transform as $\text{cofactor}\bigl(M^{-T}\bigr)$, or equivalently $\bigl(\text{cofactor}(M)\bigr)^{-T}$. Startlingly, for 3×3 matrices these formulas reduce to just $$ \frac{M}{\det M} $$ So, dual bivectors simply transform using $M$ divided by its own determinant. Follow the pattern: if a dual vector in 3D looks like a stack of parallel planes, and a dual bivector looks like a field of parallel lines, then a dual trivector looks like a cloud of parallel points. Well, drop the "parallel"—it doesn't mean anything. It's just uniformly spaced points. As before, the wedge product of three dual vectors—or a dual vector and dual bivector—constructs the continuous point cloud made of all the intersection points of the wedge factors. This quantity scales as $1/a^3$ and represents a volume density. When you pair it with a trivector, the result tells you how much of the point cloud is enclosed in that trivector's volume. The transformation rule for this one is easy—dual trivectors in 3D just get multiplied by $1/\det M$. With the introduction of dual bi- and trivectors, our "scaling zoo" is now complete! We've got the full ecosystem of vectorial quantities with scaling powers from −3 to +3, each with its proper units and matching transformation formula. In the rest of this section, I'll quickly touch on a few more mathematical aspects of this extended Grassmann algebra with dual spaces. As we saw in part 2, a vector space and its dual have a "natural pairing" operation, much like an inner product, between vectors and dual vectors.
This pairing extends to $k$-vectors and their duals, too. In fact, we can further extend the natural pairing to work between $k$-vectors and duals of different grades. For example, we can define a way to "pair" a dual vector $w$ with a bivector $B = u \wedge v$, yielding a vector: $$ \langle w, B \rangle = \langle w, u \rangle v - u \langle w, v \rangle $$ Geometrically, the resulting vector lies in the plane of $B$, and runs parallel to the level planes of $w$. In some sense, $w$ is "eating" the dimension of $B$ that lies along the direction of $w$'s density, and leaving the leftover dimension behind as a vector. This extended pairing operation is known as the interior product or contraction product, although different references often define it in slightly different ways (there are various conventions in the literature). I'm not going to go into it too deeply. The key point is that you can combine a $k$-vector with a dual $\ell$-vector, for any grades $k$ and $\ell$; the result will be a $(k-\ell)$-vector, interpreting negative grades as duals. In addition to the vector-space duality we've been talking about, Grassmann algebra contains another, distinct notion of duality: Hodge duality, represented by the Hodge star operator, $\star$. (Note that this is a different symbol from the asterisk $*$ used for the dual vector space!) The vector-space notion of duality relates $k$-vectors to duals of equal grade—vectors to dual vectors, bivectors to dual bivectors, and so on. Hodge duality, however, connects things to duals of a complementary grade. Applying the Hodge star to a $k$-vector produces an element of grade $n - k$, where $n$ is the dimension of space. In 3D, it interchanges vectors (grade 1) with bivectors (grade 2), and scalars (grade 0) with trivectors (grade 3). The way I'll define the Hodge star initially is a bit different than the standard way. In fact, there are actually two Hodge star operations: one that goes from $k$-vectors to dual $(n-k)$-vectors, and another that goes the other way. I'll denote these by $\star$ and $-\star$ respectively. The two are inverses of each other (in 3D, at least). They're defined as follows: $$ \begin{aligned} \star&: \textstyle\bigwedge^k V \to \textstyle\bigwedge^{n-k}V^* &&: & v^\star &= \langle {\bf e_{xyz}^*}, v \rangle \\ -\star&: \textstyle\bigwedge^k V^* \to \textstyle\bigwedge^{n-k}V &&: & v^{-\star} &= \langle v, {\bf e_{xyz}} \rangle \end{aligned} $$ The angle brackets on the right here are the interior product. What we're saying is: to do the Hodge star on a $k$-vector, we take its interior product with ${\bf e_{xyz}^*}$, the standard unit dual trivector (or, in $n$ dimensions, the unit dual $n$-vector). This results in a dual $(n-k)$-vector, which geometrically represents a density over all the dimensions not included in the original $k$-vector. Conversely, to do the anti-Hodge-star on a dual $k$-vector, we take its interior product with ${\bf e_{xyz}}$, giving an $(n-k)$-vector containing all the dimensions not represented by the original dual $k$-vector, i.e. all the dimensions perpendicular to its level sets. (These two operations are almost defined on disjoint domains, and could therefore be combined into one "smart" star that automatically knows what to do based on the type of its argument…except for the $k = 0$ case: when you hodge a scalar, does it go to a trivector, or to a dual trivector? Both are possible; that's why we need two distinct operations here.) 
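To make the 3D bookkeeping concrete, here is a minimal Python sketch (an added illustration, not part of the original derivation). It assumes an orthonormal basis, so that a $k$-vector and its Hodge partner share the same three components and the star reduces to relabeling which kind of object the components describe:

```python
import numpy as np

def wedge(u, v):
    """u ^ v for 3D vectors, returned as bivector components (B_yz, B_zx, B_xy)."""
    return np.array([u[1]*v[2] - u[2]*v[1],
                     u[2]*v[0] - u[0]*v[2],
                     u[0]*v[1] - u[1]*v[0]])

# In an orthonormal basis, the star of a vector is the (dual) bivector with the
# same three components, and vice versa; only the *type* of the object changes.
def star_vector(v):      # vector -> dual bivector ("field lines" run parallel to v)
    return v.copy()

def star_bivector(B):    # bivector -> dual vector (level planes parallel to B)
    return B.copy()

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 2.0, 0.0])
B = wedge(u, v)                      # bivector in the xy-plane with area 2
print(B, star_bivector(B))           # [0. 0. 2.] [0. 0. 2.]
```

The point of the sketch is only the relabeling: in a non-orthonormal basis, or after a general transformation, the dual object's components no longer coincide with the base object's, which is exactly what the next section is about.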
For 3D geometry, the interesting cases are vectors interchanging with bivectors: A vector $v$ hodges to a dual bivector whose "field lines" run parallel to $v$. A bivector $B$ hodges to a dual vector whose level planes are parallel to $B$. A dual vector $w$ unhodges to a bivector parallel to $w$'s level planes. A dual bivector $D$ unhodges to a vector parallel to $D$'s field lines. Although the formal definition was somewhat involved, you can see that the geometric result of the Hodge operations is actually pretty simple. It's all about swapping between the geometry of a $k$-vector and the corresponding level-set geometry of a dual $(n-k)$-vector. The Hodge stars are a very useful tool for working with Grassmann and dual-Grassmann quantities in practice. In most treatments of Grassmann or geometric algebra, dual spaces are hardly mentioned. The more conventional definition of the Hodge star has it mapping directly between $k$-vectors and $(n-k)$-vectors—no duals in sight. How does this work? It turns out that if we have an inner product defined on our vector space, we can use it to convert back and forth between vectors and dual vectors, or $k$-vectors and their duals. So far, we haven't discussed any means of mapping individual vectors back and forth between the base and dual spaces. Although they're both vector spaces of the same dimension, there's no natural isomorphism that would enable us to map them in a non-arbitrary way. However, the presence of an inner product does pick out a specific isomorphism with the dual space: that which maps each vector $v$ to a dual vector $v^*$ that implements dotting with $v$, using the inner product. Symbolically, for all vectors $u \in V$, we have $\langle v^*, u \rangle = v \cdot u$. This can be extended to inner products and isomorphisms for all $k$-vectors as well (see Wikipedia for details). Note, however, that this map is not preserved by scaling, or by transformations in general, because $v^*$ transforms as $M^{-T}$ while $v$ transforms as $M$. With this correspondence, it becomes possible to largely ignore the existence of dual spaces and dual elements altogether—we have the fiction that they're not distinct from the base elements. In an orthonormal basis, even the coordinates of a vector and its corresponding dual will be identical. For an example of "forgetting" about duals: the Hodge star operations can be defined using the inner product to invisibly dualize their input or output as well as hodging it. Then the two Hodge stars I defined above collapse into one operation, mapping between $\bigwedge^k V$ and $\bigwedge^{n-k} V$. This is kind of a lot. We started with just vectors and normal vectors—two kinds of vector-shaped things with different rules, which was confusing enough. But now we have four: vectors, dual vectors, bivectors, and dual bivectors. And on top of that we have three scalar-shaped things, too: true unitless scalars, trivectors, and dual trivectors. Evidently, lots of people manage to get along well enough without being totally aware of all these distinctions! Even texts on Grassmann or geometric algebra may not fully delve into the "duals" story, instead treating $k$-vectors and their duals as the same thing (implicitly using the isomorphism defined above). Their differing transformation behavior becomes sort of a curiosity, an unsystematic ornamental detail. And this comes at the cost of making some aspects of the algebra require an inner product or a metric, and only work properly in an orthonormal basis. 
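As a quick numerical check of that last point (an added sketch using NumPy; the matrix and the two test vectors are made up), here is what "transforms as $M^{-T}$" buys you: the pairing between a dual vector and a vector survives an arbitrary linear map, while pretending the dual vector is an ordinary vector does not.

```python
import numpy as np

# A nonuniform scale plus shear: the kind of map that exposes the difference
# between vectors (transform by M) and dual vectors (transform by M^{-T}).
M = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

u = np.array([1.0, 2.0, 3.0])      # an ordinary vector
w = np.array([0.5, -1.0, 0.25])    # a dual vector (linear form), stored as components

pairing_before = w @ u

u2 = M @ u                          # vectors transform by M
w2 = np.linalg.inv(M).T @ w         # dual vectors transform by the inverse transpose

print(np.isclose(w2 @ u2, pairing_before))        # True: the pairing is invariant
print(np.isclose((M @ w) @ u2, pairing_before))   # False: treating w as a vector breaks it
```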
In contrast, when you're "cooking with duals", you can derive formulas that work properly in any basis. As a quick example of this, let's look at a concrete problem you might encounter in graphics. Let's say you have a triangle mesh and you want to select a random point on it, chosen uniformly over the surface area. To do this, we must first select a random triangle, with probability proportional to area. The standard technique is to precompute the areas of all the triangles and build a prefix-sum table; then, to select a triangle, we take a uniform random value and binary-search on it in the table. Let's throw in another wrinkle, though. What if the triangle mesh is transformed—possibly by a nonuniform scaling, or a shear? In general, this will alter the areas of all the triangles, in an orientation-dependent way. A uniform distribution over surface area in the mesh's local space will no longer be uniform in world space. We could address this by pre-transforming the whole mesh into world space and doing the sampling process there—but that's more expensive than necessary. We can use bivectors to help. Instead of calculating just a scalar area for each triangle, calculate the bivector representing its orientation and area. (If the triangle's vertices are $p_1, p_2, p_3$, this is $\tfrac{1}{2}(p_2 - p_1) \wedge (p_3 - p_1)$.) Now we can transform all the bivectors into world space, using their transformation rule, and they will accurately represent the areas of the transformed triangles. Then we can calculate their magnitudes and build the prefix-sum table, as before. Conversely, suppose we have an existing, non-uniform areal probability measure defined over our triangle mesh. (Maybe it's a light source with a texture defining its emissive brightness, and we want to sample with respect to emitted power; or maybe we want to sample with respect to solid angle subtended at some point, or some sort of visual importance, etc.) We can represent these probability densities as dual bivectors, and again we can take them back and forth between local and world space—even in the presence of shear or nonuniform scaling—with confidence that we're still representing the same distribution. Some other examples where dual $k$-vectors show up: The derivative (gradient) of a scalar field, such as an SDF, is naturally a dual vector. Dual vectors represent spatial frequencies (wavevectors) in Fourier analysis. The radiance carried by a ray is a density with respect to projected area, and can therefore be represented, at least in part, as a dual bivector. Like many theoretical math concepts, I think these ideas are mostly useful for enriching your own mental models of geometry, strengthening your thought process, and deriving results that you can then use in code in a more "conventional" way. I'm not necessarily suggesting we should all go off and start implementing $k$-vectors and their duals as classes in our math libraries. (Frankly, our math libraries are enough of a mess already.) One more thing to muse on before I leave you. We've seen that there is a "scaling zoo" of mathematical elements with different physical, geometric interpretations and behaviors. Different branches of science and math have distinct ways of conceptually organizing this zoo, and thinking about its denizens and their relationships. In computer science, for example, we would probably understand vectors, bivectors, dual vectors, and so forth as different types. 
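Here is a rough sketch of the triangle-sampling idea above in NumPy (added for illustration; the tiny mesh, the matrix M, and the helper names are all made up): build per-triangle bivectors, push them to world space with the cofactor matrix, and sample a triangle from the prefix sum of their magnitudes.

```python
import numpy as np

def wedge(u, v):
    # Bivector components (B_yz, B_zx, B_xy); numerically the same as the cross product in 3D.
    return np.cross(u, v)

def cofactor(M):
    # Bivectors transform by the cofactor matrix: cofactor(M) = det(M) * M^{-T}.
    return np.linalg.det(M) * np.linalg.inv(M).T

def triangle_bivectors(verts, tris):
    p1, p2, p3 = verts[tris[:, 0]], verts[tris[:, 1]], verts[tris[:, 2]]
    return 0.5 * wedge(p2 - p1, p3 - p1)          # one bivector per triangle

def build_area_cdf(bivectors, M):
    world = bivectors @ cofactor(M).T             # transform each bivector to world space
    areas = np.linalg.norm(world, axis=1)         # world-space triangle areas
    cdf = np.cumsum(areas)
    return cdf / cdf[-1]

def sample_triangle(cdf, u):
    # u is a uniform random number in [0, 1); returns the index of the chosen triangle.
    return int(np.searchsorted(cdf, u))

# Tiny example: two triangles, one of which M stretches much larger along x.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]], dtype=float)
tris = np.array([[0, 1, 2], [1, 3, 2]])
M = np.diag([10.0, 1.0, 1.0])                    # nonuniform scale
cdf = build_area_cdf(triangle_bivectors(verts, tris), M)
print(sample_triangle(cdf, np.random.rand()))
```

The design point is that only the small per-triangle bivectors are transformed, not the whole mesh, yet the resulting areas match what you would get by transforming every vertex first.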
Each might have an internal structure as a composition of more elementary values (real numbers), and a suite of allowed operations that define what you can do with them and how they interact with one another. Physicists, meanwhile, tend to take a more rough-and-ready approach: geometric elements are thought of as simply matrices of real (or sometimes complex) numbers, together with transformation laws—rules that define what happens to a given matrix under a change of coordinates. Algebraic properties such as anticommutativity are obtained by constructing the matrices in such a way that matrix multiplication implements the desired algebra. For example, a bivector can be represented as an antisymmetric matrix; wedging two vectors $u, v$ to make a bivector corresponds to calculating the matrix $$ uv^T - vu^T $$ which has the same anticommutative property as a wedge product. Multiplying this matrix by a (dual) vector $w$ then represents the interior product of the bivector with $w$. Meanwhile, a dual bivector would be structurally similar, but have a different transformation law ("covariant" versus "contravariant"). Lastly, mathematicians like to formalize things by saying that different geometric quantities are elements of different spaces and/or algebras. Both terms ultimately mean a set (in the mathematical sense), together with some extra structure—such as algebraic operations, a topology, a norm or metric, and so on—defined on top of the bare set. The exact kind of structures you need depends on what you're doing, and there's a whole menagerie of such structures that might be invoked in different contexts. So which structure is behind the scaling zoo? We know we've got the vector space structure, and the Grassmann algebraic structure. But neither of these fully accounts for the different scaling and transformation behaviors of dual elements: dual spaces are isomorphic to their base spaces (in finite dimensions), totally identical insofar as the vector and Grassmann structures are concerned. I don't have a fully developed answer yet—but I suspect it's got to do with the representation theory of Lie groups. My guess is that the different types of scaling elements we've seen can be codified as vector spaces acted on by different representations of $GL(n)$, the Lie group of all linear maps on $\Bbb R^n$. But I'm not going to get into that here. (If you'd like to read more on this, here are a couple web references: one, two. Also: Peter Woit's book on the role of representation theory in particle physics.) I hope this has been an entertaining and enlightening tour through some of the layers beneath the surface of your favorite Euclidean geometry. We started with a seemingly simple question—why do normal vectors transform using the inverse transpose matrix?—and found that there was much more rich structure there than meets the eye. The "scaling zoo" of $k$-vectors and their duals makes a pleasingly complete and symmetrical whole. Even if I'm not going to be employing these things in practical work every day, I feel that studying them has helped me understand some things that were vague and foggy in my mind before. It's worth appreciating that these subtle distinctions exist. One of my general axioms in life is that everything is more complicated than it first appears, and nowhere is this more consummately borne out than mathematics! 
Bokai Cao, Xiangnan Kong, Jingyuan Zhang, Philip S. Yu and Ann B. Ragin
Brain Informatics (Brain Data Computing and Health Studies), 2015, 2:23. Received: 4 October 2015.

Investigating brain connectivity networks for neurological disorder identification has attracted great interest in recent years; most existing work focuses on the graph representation alone. However, in addition to brain networks derived from the neuroimaging data, hundreds of clinical, immunologic, serologic, and cognitive measures may also be documented for each subject. These measures compose multiple side views encoding a tremendous amount of supplemental information for diagnostic purposes, yet are often ignored. In this paper, we study the problem of subgraph selection from brain networks with side information guidance and propose a novel solution to find an optimal set of subgraph patterns for graph classification by exploring a plurality of side views. We derive a feature evaluation criterion, named gSide, to estimate the usefulness of subgraph patterns based upon side views. Then we develop a branch-and-bound algorithm, called gMSV, to efficiently search for optimal subgraph patterns by integrating the subgraph mining process and the procedure of discriminative feature selection. Empirical studies on graph classification tasks for neurological disorders using brain networks demonstrate that subgraph patterns selected by the multi-side-view-guided subgraph selection approach can effectively boost graph classification performances and are relevant to disease diagnosis.
Keywords: Subgraph pattern, graph mining, side information, brain network.

1 Introduction
Modern neuroimaging techniques have enabled us to model the human brain as a brain connectivity network or a connectome. Rather than vector-based feature representations as traditional data, brain networks are inherently in the form of graph representations which are composed of brain regions as the nodes, e.g., insula, hippocampus, thalamus, and functional/structural connectivities between the brain regions as the links. The linkage structure in these brain networks can encode tremendous information concerning the integrated activity of the human brain. For example, in brain networks derived from functional magnetic resonance imaging (fMRI), connections/links can encode correlations between brain regions in functional activity, while structural links in diffusion tensor imaging (DTI) can capture white matter fiber pathways connecting different brain regions. The complex structures and the lack of vector representations within these graph data raise a challenge for data mining. An effective model for mining the graph data should be able to extract a set of subgraph patterns for further analysis. Motivated by such challenges, graph mining research problems, in particular graph classification, have received considerable attention in the last decade. The graph classification problem has been studied extensively. Conventional approaches focus on mining discriminative subgraphs from the graph view alone. This is usually feasible for applications like molecular graph analysis, where a large set of graph instances with labels are available. For brain network analysis, however, usually we only have a small number of graph instances, ranging from 30 to 100 brain networks [19]. In these applications, the information from the graph view alone may not be sufficient for mining important subgraphs.
Commonly, however, in neurological studies, hundreds of clinical, serologic, and cognitive measures are available for each subject in addition to brain networks derived from the neuroimaging data [4, 5]. These measures comprise multiple side views. This supplemental information, which is generally ignored, may contain a plurality of side views to guide the process of subgraph mining in brain networks.
Fig. 1 An example of multiple side views associated with brain networks in medical studies
Despite its value and significance, the feature selection problem for graph data using auxiliary views has not been studied in this context so far. There are two major difficulties in learning from multiple side views for graph classification, as follows:
1.1 The primary view in graph representation
Graph data naturally compose the primary view for graph mining problems, from which we want to select discriminative subgraph patterns for graph classification. However, it raises a challenge for data mining with the complex structures and the lack of vector representations. Conventional feature selection approaches in vector spaces usually assume that a set of features are given before conducting feature selection. In the context of graph data, however, subgraph features are embedded within the graph structures and usually it is not feasible to enumerate the full set of subgraph features for a graph dataset before feature selection. Actually, the number of subgraph features grows exponentially with the size of graphs.
1.2 The side views in vector representations
In many applications, side information is available along with the graph data and usually exists in the form of vector representations. That is to say, an instance is represented by a graph and additional vector-based features at the same time. It introduces us to the problem of how to leverage the relationship between the primary graph view and a plurality of side views, and how to facilitate the subgraph mining procedure by exploring the vector-based auxiliary views. For example, in brain networks, discriminative subgraph patterns for neurological disorders indicate brain injuries associated with particular regions. Such changes can potentially express in other medical tests of the subject, e.g., clinical, immunologic, serologic, and cognitive measures. Thus, it would be desirable to select subgraph features that are consistent with these side views.
Fig. 2 Two strategies of leveraging side views in the feature selection process for graph classification: late fusion and early fusion [6]
Figure 2 illustrates two strategies of leveraging side views in the process of selecting subgraph patterns. Conventional graph classification approaches treat side views and subgraph patterns separately and may only combine them at the final stage of training a classifier. Obviously, the valuable information embedded in side views is not fully leveraged in the feature selection process. Most subgraph mining approaches focus on the drug discovery problem, where a great amount of graph data for chemical compounds is available. For neurological disorder identification, however, there are usually limited subjects with a small sample size of brain networks available. Therefore, it is critical to learn knowledge from other possible sources.
We notice that transfer learning can borrow supervision knowledge from the source domain to help the learning on the target domain, e.g., finding a good feature representation [10], mapping relational knowledge [24, 25], and learning across graph database [29]. However, to the best of our knowledge, they do not consider transferring complementary information from vector-based side views to graph database whose instances are complex structural graphs. To solve the above problems, in this paper, we introduce a novel framework that fuses heterogeneous data sources at an early stage. In contrast to existing subgraph mining approaches that focus on a single view of the graph representation, our method can explore multiple vector-based side views to find an optimal set of subgraph features for graph classification. We first verify side information consistency via statistical hypothesis testing. Based on auxiliary views and the available label information, we design an evaluation criterion for subgraph features, named gSide. By deriving a lower bound, we develop a branch-and-bound algorithm, called gMSV, to efficiently search for optimal subgraph features with pruning, thereby avoiding exhaustive enumeration of all subgraph features. In order to evaluate our proposed model, we conduct experiments on graph classification tasks for neurological disorders, using fMRI and DTI brain networks. The experiments demonstrate that our subgraph selection approach using multiple side views can effectively boost graph classification performances. Moreover, we show that gMSV is more efficient by pruning the subgraph search space via gSide. 2 Problem formulation A motivation for this work is the premise that side information could be strongly correlated with neurological status. Before presenting the subgraph feature selection model, we first introduce the notations that will be used throughout this paper. Let \({\mathcal{D}}=\{G_1,\ldots ,G_n\}\) denote the graph dataset, which consists of n graph objects. The graphs within \({\mathcal{D}}\) are labeled by \([y_1,\ldots ,y_n]^\top\), where \(y_i\in \{-1,+1\}\) denotes the binary class label of \(G_i\). (Graph) A graph is represented as \(G =(V,E)\), where \(V=\{v_1,\ldots ,v_{n_v}\}\) is the set of vertices, \(E\subseteq V\times V\) is the set of edges. (Subgraph) Let \(G'=(V',E')\) and \(G=(V,E)\) be two graphs. \(G'\) is a subgraph of G (denoted as \(G'\subseteq G\)) iff \(V'\subseteq V\) and \(E'\subseteq E\). If \(G'\) is a subgraph of G, then G is supergraph of \(G'\). (Side view) A side view is a set of vector-based features \({\mathbf{z}}_i=[z_1,\ldots ,z_d]^\top\) associated with each graph object \(G_i\), where d is the dimensionality of this view. A side view is denoted as \({\mathcal{Z}}=\{{\mathbf{z}}_1,\ldots ,{\mathbf{z}}_n\}\). We assume that multiple side views \(\{{\mathcal{Z}}^{(1)},\ldots ,{\mathcal{Z}}^{(v)}\}\) are available along with the graph dataset \({\mathcal{D}}\), where v is the number of side views. We employ kernels \(\kappa ^{(p)}\) on \({\mathcal{Z}}^{(p)}\), such that \(\kappa ^{(p)}_{ij}\) represents the similarity between \(G_i\) and \(G_j\) from the perspective of the p-th view. 
The RBF kernel is used as the default kernel in this paper, unless otherwise specified:
$$\kappa ^{(p)}_{ij}={\text{exp}} \left( -\frac{\Vert {\mathbf{z}}_i^{(p)}-{\mathbf{z}}_j^{(p)}\Vert _2^2}{d^{(p)}}\right)$$
In this paper, we adopt the idea of subgraph-based graph classification approaches, which assume that each graph object \(G_j\) is represented as a binary vector \({\mathbf{x}}_j=[x_{1j},\ldots ,x_{mj}]^\top\) associated with the full set of subgraph patterns \(\{g_1,\ldots ,g_m\}\) for the graph dataset \(\{G_1,\ldots ,G_n\}\). Here \(x_{ij}\in \{0,1\}\) is the binary feature of \(G_j\) corresponding to the subgraph pattern \(g_i\), and \(x_{ij}=1\) iff \(g_i\) is a subgraph of \(G_j\) (\(g_i\subseteq G_j\)), otherwise \(x_{ij}=0\). Let \(X=[x_{ij}]^{m\times n}\) denote the matrix consisting of binary feature vectors using \({\mathcal{S}}\) to represent the graph dataset \({\mathcal{D}}\): \(X=[{\mathbf{x}}_1,\ldots ,{\mathbf{x}}_n]=[{\mathbf{f}}_1,\ldots ,{\mathbf{f}}_m]^\top \in \{0,1\}^{m\times n}\). The full set \({\mathcal{S}}\) is usually too large to be enumerated. There is usually only a subset of subgraph patterns \({\mathcal{T}}\subseteq {\mathcal{S}}\) relevant to the task of graph classification. We briefly summarize the notations used in this paper in Table 1.
The key issue of discriminative subgraph selection using multiple side views is how to find an optimal set of subgraph patterns for graph classification by exploring the auxiliary views. This is non-trivial due to the following problems: (1) How to leverage the valuable information embedded in multiple side views to evaluate the usefulness of a set of subgraph patterns? (2) How to efficiently search for the optimal subgraph patterns without exhaustive enumeration in the primary graph space? In the following sections, we will first introduce the optimization framework for selecting discriminative subgraph features using multiple side views. Next, we will describe our subgraph mining strategy using the evaluation criterion derived from the optimization solution.
Table 1 Important notations
\(|\cdot |\): Cardinality of a set
\(\Vert \cdot \Vert\): Norm of a vector
\({\mathcal{D}}=\{G_1,\ldots ,G_n\}\): Given graph dataset; \(G_i\) denotes the i-th graph in the dataset
\({\mathbf{y}}=[y_1,\ldots ,y_n]^\top\): Class label vector for graphs in \({\mathcal{D}}\), \(y_i\in \{-1,+1\}\)
\({\mathcal{S}}=\{g_1,\ldots ,g_m\}\): Set of all subgraph patterns in the graph dataset \({\mathcal{D}}\)
\({\mathbf{f}}_i=[f_{i1},\ldots ,f_{in}]^\top\): Binary vector for subgraph pattern \(g_i\); \(f_{ij}=1\) iff \(g_i\subseteq G_j\), otherwise \(f_{ij}=0\)
\({\mathbf{x}}_j=[x_{1j},\ldots ,x_{mj}]^\top\): Binary vector for \(G_j\) using subgraph patterns in \({\mathcal{S}}\); \(x_{ij}=1\) iff \(g_i\subseteq G_j\), otherwise \(x_{ij}=0\)
\(X=[x_{ij}]^{m\times n}\): Matrix of all binary vectors in the dataset, \(X=[{\mathbf{x}}_1,\ldots ,{\mathbf{x}}_n]=[{\mathbf{f}}_1,\ldots ,{\mathbf{f}}_m]^\top \in \{0,1\}^{m\times n}\)
\({\mathcal{T}}\): Set of selected subgraph patterns, \({\mathcal{T}}\subseteq {\mathcal{S}}\)
\({\mathcal{I}}_{\mathcal{T}}\in \{0,1\}^{m\times m}\): Diagonal matrix indicating which subgraph patterns are selected from \({\mathcal{S}}\) into \({\mathcal{T}}\)
min_sup: Minimum frequency threshold; frequent subgraphs are contained by at least min_sup \(\times |{\mathcal{D}}|\) graphs
k: Number of subgraph patterns to be selected
\(\lambda ^{(p)}\): Weight of the p-th side view (default: 1)
\(\kappa ^{(p)}\): Kernel function on the p-th side view (default: RBF kernel)
Table 2 Demographic characteristics
Age (mean years \(\pm\) SD): 33.3 \(\pm\) 10.1 / 31.4 \(\pm\) 8.9
Gender (% male)
Race (% white)
Education (% college)
3 Data analysis
A motivation for this work is that the side information could be strongly correlated with the health state of a subject. Before proceeding, we first introduce real-world data used in this work and investigate whether the available information from side views has any potential impact on neurological disorder identification.
3.1 Data collections
In this paper, we study the real-world datasets collected from the Chicago Early HIV Infection Study at Northwestern University [27]. The clinical cohort includes 56 HIV (positive) and 21 seronegative controls (negative). Demographic information is presented in Table 2. HIV and seronegative groups did not differ in age, gender, racial composition or education level. More detailed information about data acquisition can be found in [5]. The datasets contain functional magnetic resonance imaging (fMRI) and diffusion tensor imaging (DTI) for each subject, from which brain networks can be constructed, respectively. For fMRI data, we used the DPARSF toolbox to extract a sequence of responses from each of the 116 anatomical volumes of interest (AVOI), where each AVOI represents a different brain region. The correlations of brain activities among different brain regions are computed. Positive correlations are used as links among brain regions. For details, functional images were realigned to the first volume, slice timing corrected, and normalized to the MNI template and spatially smoothed with an 8-mm Gaussian kernel. The linear trend of time series and temporally band-pass filtering (0.01–0.08 Hz) were removed. Before the correlation analysis, several sources of spurious variance were also removed from the data through linear regression: (i) six parameters obtained by rigid body correction of head motion, (ii) the whole-brain signal averaged over a fixed region in atlas space, (iii) signal from a ventricular region of interest, and (iv) signal from a region centered in the white matter.
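As a rough illustration of the graph construction just described (a sketch added here, not code from the study; the 0.9 threshold is the fMRI value reported later in Sect. 5.1, and the toy time series are random stand-ins), a binary functional connectivity network can be built from preprocessed ROI time series as follows:

```python
import numpy as np

def fmri_brain_network(ts, threshold=0.9):
    """Build a binary functional connectivity graph from ROI time series.

    ts: array of shape (n_timepoints, n_regions) -- one preprocessed BOLD
        series per brain region (90 regions in the study above).
    Returns a symmetric 0/1 adjacency matrix; links are positive
    correlations that exceed the threshold.
    """
    corr = np.corrcoef(ts.T)              # (n_regions, n_regions) correlation matrix
    adj = (corr > threshold).astype(int)  # keep only strong positive correlations
    np.fill_diagonal(adj, 0)              # no self-loops
    return adj

# Toy usage with random data standing in for real fMRI time series.
rng = np.random.default_rng(0)
toy_ts = rng.standard_normal((200, 90))
A = fmri_brain_network(toy_ts, threshold=0.9)
print(A.shape, A.sum() // 2)              # 90x90 adjacency, number of links
```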
Each brain is represented as a graph with 90 nodes corresponding to 90 cerebral regions, excluding 26 cerebellar regions. For DTI data, we used the FSL toolbox to extract the brain networks. The processing pipeline consists of the following steps: (i) correct the distortions induced by eddy currents in the gradient coils and use affine registration to a reference volume for head motion, (ii) delete non-brain tissue from the image of the whole head [15, 30], (iii) fit the diffusion tensor model at each voxel, (iv) build up distributions on diffusion parameters at each voxel, and (v) repetitively sample from the distributions of voxel-wise principal diffusion directions. As with the fMRI data, the DTI images were parcellated into 90 regions (45 for each hemisphere) by propagating the Automated Anatomical Labeling (AAL) to each image [34]. Min-max normalization was applied on link weights. In addition, for each subject, hundreds of clinical, imaging, immunologic, serologic, and cognitive measures were documented. Seven groups of measurements were investigated in our datasets, including neuropsychological tests, flow cytometry, plasma luminex, freesurfer, overall brain microstructure, localized brain microstructure, and brain volumetry. Each group can be regarded as a distinct view that partially reflects subject status, and measurements from different medical examinations can provide complementary information. Moreover, we preprocessed the features by min-max normalization before employing the RBF kernel on each view.
Table 3 Hypothesis testing results (p values) to verify side information consistency. Rows: neuropsychological tests, plasma luminex, freesurfer, overall brain microstructure, localized brain microstructure, brain volumetry; columns: fMRI dataset, DTI dataset. Only one entry (1.3220e−20, for neuropsychological tests) is preserved in this copy.
3.2 Verifying side information consistency
We study the potential impact of side information on selecting subgraph patterns via statistical hypothesis testing. Side information consistency suggests that the similarity of side view features between instances with the same label is likely to be larger than that between instances with different labels. We use hypothesis testing to validate whether this statement holds in the fMRI and DTI datasets. For each side view, we first construct two vectors \({\mathbf{a}}_s^{(p)}\) and \({\mathbf{a}}_d^{(p)}\) with an equal number of elements, sampled from the sets \({\mathcal{A}}_s^{(p)}\) and \({\mathcal{A}}_d^{(p)}\), respectively: $${\mathcal{A}}_s^{(p)}=\{\kappa ^{(p)}_{ij}|y_iy_j=1\}$$ $${\mathcal{A}}_d^{(p)}=\{\kappa ^{(p)}_{ij}|y_iy_j=-1\}$$ Then, we form a two-sample one-tail t test to validate the existence of side information consistency. We test whether there is sufficient evidence to support the hypothesis that the similarity score in \({\mathbf{a}}_s^{(p)}\) is larger than that in \({\mathbf{a}}_d^{(p)}\). The null hypothesis is \(H_0: \mu _s^{(p)}-\mu _d^{(p)}\le 0\), and the alternative hypothesis is \(H_1: \mu _s^{(p)}-\mu _d^{(p)}>0\), where \(\mu _s^{(p)}\) and \(\mu _d^{(p)}\) represent the sample means of similarity scores in the two groups, respectively. The t test results, p values, are summarized in Table 3. The results show that there is strong evidence, with significance level \(\alpha =0.05\), to reject the null hypothesis on the two datasets. In other words, we validate the existence of side information consistency in neurological disorder identification, thereby paving the way for our next study of leveraging multiple side views for discriminative subgraph selection.
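A minimal sketch of this consistency test (added for illustration; it assumes SciPy 1.6 or later for the one-tailed alternative, skips the min-max preprocessing, and uses made-up toy data) could look like:

```python
import numpy as np
from scipy.stats import ttest_ind   # SciPy >= 1.6 for the `alternative` argument

def rbf_kernel(Z):
    """RBF kernel over one side view, as in Eq. (1); Z has shape (n_subjects, d)."""
    d = Z.shape[1]
    sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / d)

def side_view_consistency(Z, y):
    """One-tailed two-sample t test: are same-label pairs more similar than
    different-label pairs under this view's kernel? Returns the p value."""
    rng = np.random.default_rng(0)
    K = rbf_kernel(Z)
    iu, ju = np.triu_indices(len(y), k=1)          # each unordered pair once
    same = K[iu, ju][y[iu] == y[ju]]
    diff = K[iu, ju][y[iu] != y[ju]]
    m = min(len(same), len(diff))                  # equal-sized samples, as in Sect. 3.2
    a_s = rng.choice(same, size=m, replace=False)
    a_d = rng.choice(diff, size=m, replace=False)
    return ttest_ind(a_s, a_d, alternative='greater').pvalue

# Toy usage: 20 subjects, 5 measures in one side view, binary labels.
rng = np.random.default_rng(1)
y = np.array([1] * 10 + [-1] * 10)
Z = rng.standard_normal((20, 5)) + 0.5 * y[:, None]   # labels weakly expressed in the view
print(side_view_consistency(Z, y))
```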
4 Multi-side-view discriminative subgraph selection In this section, we address the first problem discussed in Sect. 2 by formulating the discriminative subgraph selection problem as a general optimization framework as follows: $${\mathcal{T}}^*={\mathrm{argmin}}_{{\mathcal{T}}\subseteq {\mathcal{S}}}{\mathcal{F}}({\mathcal{T}}) \quad {\text{s.t.}} \,|{\mathcal{T}}|\le k$$ where \(|\cdot |\) denotes the cardinality and k is the maximum number of feature selected. \({\mathcal{F}}({\mathcal{T}})\) is the evaluation criterion to estimate the score (can be the lower the better in this paper) of a subset of subgraph patterns \({\mathcal{T}}\). \({\mathcal{T}}^*\) denotes the optimal set of subgraph patterns \({\mathcal{T}}^*\subseteq {\mathcal{S}}\). 4.1 Exploring multiple side views: gSide Following the observations in Sect. 3.2 that the side view information is clearly correlated with the prespecified label information, we assume that the set of optimal subgraph patterns should have the following properties. The similarity/distance between instances in the space of subgraph features should be consistent with that in the space of a side view. That is to say, if two instances are similar in the space of the p-th view (i.e., a high \(\kappa ^{(p)}_{ij}\) value), they should also be close to each other in the space of subgraph features (i.e., a small distance between subgraph feature vectors). On the other hand, if two instances are dissimilar in the space of the p-th view (i.e., a low \(\kappa ^{(p)}_{ij}\) value), they should be far away from each other in the space of subgraph features (i.e., a large distance between subgraph feature vectors). Therefore, our objective function could be to minimize the distance between subgraph features of each pair of similar instances in each side view, and maximize the distance between dissimilar instances. This idea is formulated as follows: $$\mathop {\text {argmin}}\limits_{ {\mathcal{T}}\subseteq{\mathcal{S}}}\frac{1}{2}\sum _{p=1}^v\lambda ^{(p)}\sum _{i,j=1}^n \Vert {\mathcal{I}}_{\mathcal{T}}{\mathbf{x}}_i-{\mathcal{I}}_{\mathcal{T}}{\mathbf{x}}_j\Vert ^2_2\Theta ^{(p)}_{ij}$$ where \({\mathcal{I}}_{\mathcal{T}}\) is a diagonal matrix indicating which subgraph features are selected into \({\mathcal{T}}\) from \({\mathcal{S}}\), \(({\mathcal{I}}_{\mathcal{T}})_{ii}=1\) iff \(g_i\in {\mathcal{T}}\), otherwise \(({\mathcal{I}}_{\mathcal{T}})_{ii}=0\). The parameters \(\lambda ^{(p)}\ge 0\) are employed to control the contributions from each view. $$\Theta_{ij}^{(p)} = \left\{ \begin{array}{ll} \frac{1}{|{\mathcal{H}}^{(p)}|} \, &{}\,(i,j)\in {\mathcal{H}}^{(p)}\\ -\frac{1}{|{\mathcal{L}}^{(p)}|}\,&{}\,(i,j)\in {\mathcal{L}}^{(p)} \end{array} \right.$$ where \({\mathcal{H}}^{(p)}=\{(i,j)|\kappa ^{(p)}_{ij}\ge \mu ^{(p)}\}\), \({\mathcal{L}}^{(p)}=\{(i,j)|\kappa ^{(p)}_{ij}<\mu ^{(p)}\}\), and \(\mu ^{(p)}\) is the mean value of \(\kappa ^{(p)}_{ij}\), i.e., \(\frac{1}{n^2}\sum _{i,j=1}^n\kappa ^{(p)}_{ij}\). This normalization is to balance the effect of similar instances and dissimilar instances. Intuitively, Eq. (5) will minimize the distance between subgraph features of similar instance-pairs with \(\kappa ^{(p)}_{ij}\ge \mu ^{(p)}\), while maximizing the distance between dissimilar instance-pairs with \(\kappa ^{(p)}_{ij}<\mu ^{(p)}\) in each view. In this way, the side view information is effectively used to guide the process of discriminative subgraph selection. The fact verified in Sect. 
3.2 that the side view information is clearly correlated with the prespecified label information can be very useful, especially in the semi-supervised setting. With prespecified information for labeled graphs, we further consider that the optimal set of subgraph patterns should satisfy the following constraints: labeled graphs in the same class should be close to each other; labeled graphs in different classes should be far away from each other. Intuitively, these constraints tend to select the most discriminative subgraph patterns based on the graph labels. Such an idea has been well explored in the context of dimensionality reduction and feature selection [2, 32]. The constraints above can be mathematically formulated as minimizing the loss function: $$\mathop {\text {argmin}}\limits_{ {\mathcal{T}}\subseteq{\mathcal{S}}}\frac{1}{2}\sum _{i,j=1}^n\Vert {\mathcal{I}}_{\mathcal{T}}{\mathbf{x}}_i-{\mathcal{I}}_{\mathcal{T}}{\mathbf{x}}_j\Vert ^2_2\Omega _{ij}$$ $$\Omega _{ij} = \left\{ \begin{array}{ll} \frac{1}{|{\mathcal{M}}|}\,&{}\,(i,j)\in {\mathcal{M}}\\ -\frac{1}{|{\mathcal{C}}|}\,&{}\,(i,j)\in {\mathcal{C}}\\ 0\,&{}\,{\text{otherwise}} \end{array} \right.$$ and \({\mathcal{M}}=\{(i,j)|y_i y_j=1\}\) denotes the set of pairwise constraints between graphs with the same label, and \({\mathcal{C}}=\{(i,j)|y_i y_j=-1\}\) denotes the set of pairwise constraints between graphs with different labels. By defining matrix \(\Phi \in {\mathbb{R}}^{n\times n}\) as $$\Phi _{ij}=\Omega _{ij}+\sum _{p=1}^v\lambda ^{(p)}\Theta ^{(p)}_{ij}$$ we can combine and rewrite the function in Eq. (5) and Eq. (7) as $$\begin{aligned} {\mathcal{F}}({\mathcal{T}})&=\frac{1}{2}\sum _{i=1}^n\sum _{j=1}^n \Vert {\mathcal{I}}_{\mathcal{T}}{\mathbf{x}}_i-{\mathcal{I}}_{\mathcal{T}}{\mathbf{x}}_j\Vert ^2_2\Phi _{ij} \\& = {\text{tr}} ({\mathcal{I}}^{\top }_{\mathcal{T}}X(D-\Phi )X^{\top }{\mathcal{I}}_{\mathcal{T}}) \\& = {\text{tr}} ({\mathcal{I}}^{\top }_{\mathcal{T}}XLX^{\top }{\mathcal{I}}_{\mathcal{T}}) \\&=\sum _{g_i\in {\mathcal{T}}}{\mathbf {f}}_i^\top L{\mathbf{f}}_i \end{aligned}$$ where \({\text{tr}} (\cdot )\) is the trace of a matrix, D is a diagonal matrix whose entries are column sums of \(\Phi\), i.e., \(D_{ii}=\sum _{j}\Phi _{ij}\), and \(L=D-\Phi\) is a Laplacian matrix. (gSide)Let \('{\mathcal{D}}=\{G_1,\ldots ,G_n\}\) denote a graph dataset with multiple side views. Suppose \(\Phi\) is a matrix defined as Eq. (9), and L is a Laplacian matrix defined as \(L=D-\Phi\), where D is a diagonal matrix, \(D_{ii}=\sum _{j}\Phi _{ij}\). We define an evaluation criterion q, called gSide, for a subgraph pattern \(g_i\) as $$q(g_i)={\mathbf{f}}_i^\top L {\mathbf{f}}_i$$ where \({\mathbf{f}}_i=[f_{i1},\ldots ,f_{in}]^\top \in \{0,1\}^n\) is the indicator vector for subgraph pattern \(g_i\), \(f_{ij}=1\) iff \(g_i\subseteq G_j\), otherwise \(f_{ij}=0\). Since the Laplacian matrix L is positive semi-definite, for any subgraph pattern \(g_i\), \(q(g_i)\ge 0\). Based on gSide as defined above, the optimization problem in Eq. (4) can be written as $${\mathcal{T}}^*= \mathop {\text {argmin}}\limits_{ {\mathcal{T}}\subseteq{\mathcal{S}}}\sum _{g_i \in {\mathcal{T}}}q(g_i) \quad {\text{s.t.}}\,|{\mathcal{T}}|\le k$$ The optimal solution to the problem in Eq. (12) can be found by using gSide to conduct feature selection on a set of subgraph patterns in \({\mathcal{S}}\). 
Suppose the gSide values for all subgraph patterns are denoted as \(q(g_1)\le \cdots \le q(g_m)\) in sorted order, then the optimal solution to the optimization problem in Eq. (12) is $${\mathcal{T}}^*= {\cup }_{i=1}^k\{g_i\}$$ 4.2 Searching with a lower bound: gMSV Now we address the second problem discussed in Sect. 2, and propose an efficient method to find the optimal set of subgraph patterns from a graph dataset with multiple side views. A straightforward solution to the goal of finding an optimal feature set is the exhaustive enumeration, i.e., we could first enumerate all subgraph patterns from a graph dataset, and then calculate the gSide values for all subgraph patterns. In the context of graph data, however, it is usually not feasible to enumerate the full set of subgraph patterns before feature selection. Actually, the number of subgraph patterns grows exponentially with the size of graphs. Inspired by recent advances in graph classification approaches [7, 20, 21, 37], which nest their evaluation criteria into the subgraph mining process and develop constraints to prune the search space, we adopt a similar approach by deriving a different constraint based upon gSide. By adopting the gSpan algorithm proposed by Yan and Han [38], we can enumerate all the subgraph patterns for a graph dataset in a canonical search space. In order to prune the subgraph search space, we now derive a lower bound of the gSide value: Theorem 1 Given any two subgraph patterns \(g_i,g_j\in {\mathcal{S}}\), \(g_j\) is a supergraph of \(g_i\), i.e., \(g_i\subseteq g_j\). The gSide value of \(g_j\) is bounded by \({\hat{q}}(g_i)\), i.e., \(q(g_j)\ge {\hat{q}}(g_i)\). \({\hat{q}}(g_i)\) is defined as $${\hat{q}}(g_i)\triangleq {\mathbf{f}}_i^\top {\hat{L}} {\mathbf{f}}_i$$ where the matrix \({\hat{L}}\) is defined as \({\hat{L}}_{pq}\,\triangleq \,\min (0,L_{pq})\). According to Definition 4, $$q(g_j) = {\mathbf{f}}_j^\top L {\mathbf{f}}_j=\sum _{p,q:G_p,G_q\in {\mathcal{G}} (g_j)}L_{pq}$$ where \({\mathcal{G}} (g_j)\,\triangleq\, \{G_k|g_j\subseteq G_k,1\le k\le n\}\). Since \(g_i\,\subseteq\, g_j\), according to anti-monotonic property, we have \({\mathcal{G}} (g_j)\,\subseteq\, {\mathcal{G}} (g_i)\). Also \({\hat{L}}_{pq}\,\triangleq\, \min (0,L_{pq})\), we have \({\hat{L}}_{pq}\le L_{pq}\) and \({\hat{L}}_{pq}\le 0\). Therefore, $$\begin{aligned} q(g_j)&=\sum _{p,q:G_p,G_q\in {\mathcal{G}} (g_j)}L_{pq}\ge \sum _{p,q:G_p,G_q\in {\mathcal{G}} (g_j)} {\hat{L}}_{pq} \\&\ge \sum _{p,q:G_p,G_q\in {\mathcal{G}} (g_i)} {\hat{L}}_{pq} = {\hat{q}}(g_i) \end{aligned}$$ Thus, for any \(g_i\subseteq g_j\), \(q(g_j)\ge {\hat{q}}(g_i)\). □ We can now nest the lower bound into the subgraph mining steps in gSpan to efficiently prune the DFS code tree. During the depth-first search through the DFS code tree, we always maintain the currently top-k best subgraph patterns according to gSide and the temporally suboptimal gSide value (denoted by \(\theta\)) among all the gSide values calculated before. If \({\hat{q}}(g_i)\ge \theta\), the gSide value of any supergraph \(g_j\) of \(g_i\) should be no less than \({\hat{q}}(g_i)\) according to Theorem 1, i.e., \(q(g_j)\ge {\hat{q}}(g_i)\ge \theta\). Thus, we can safely prune the subtree rooted from \(g_i\) in the search space. If \({\hat{q}}(g_i)<\theta\), we cannot prune this subtree since there might exist a supergraph \(g_j\) of \(g_i\) such that \(q(g_j)<\theta\). 
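For illustration, a compact NumPy sketch of gSide and the Theorem 1 bound (added here; it assumes the subgraph indicator vectors are already available and omits the gSpan enumeration itself) might look like:

```python
import numpy as np

def theta_matrix(K):
    """Eq. (6): +1/|H| for pairs at least as similar as the mean kernel value,
    -1/|L| for the remaining pairs."""
    H = K >= K.mean()
    return np.where(H, 1.0 / H.sum(), -1.0 / (~H).sum())

def omega_matrix(y):
    """Eq. (8): +1/|M| for same-label pairs, -1/|C| for different-label pairs."""
    same = np.equal.outer(y, y)
    return np.where(same, 1.0 / same.sum(), -1.0 / (~same).sum())

def gside_laplacian(y, kernels, weights=None):
    """Eqs. (9)-(10): Phi = Omega + sum_p lambda_p * Theta^(p); L = D - Phi."""
    weights = weights or [1.0] * len(kernels)
    Phi = omega_matrix(y) + sum(w * theta_matrix(K) for w, K in zip(weights, kernels))
    return np.diag(Phi.sum(axis=1)) - Phi

def gside(f, L):
    """Eq. (11): q(g) = f^T L f, where f is the 0/1 indicator vector of g."""
    return f @ L @ f

def gside_lower_bound(f, L):
    """Theorem 1: q_hat(g) = f^T min(0, L) f bounds q of every supergraph of g."""
    return f @ np.minimum(L, 0.0) @ f

def can_prune(f, L, theta):
    """Prune the DFS subtree rooted at g if its bound already reaches the
    current suboptimal gSide value theta."""
    return gside_lower_bound(f, L) >= theta
```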
As long as a subgraph \(g_i\) can improve the gSide values of any subgraphs in \({\mathcal{T}}\), it is added into \({\mathcal{T}}\) and the worst-scoring subgraph (the one with the largest gSide value) is removed from \({\mathcal{T}}\). Then we recursively search for the next subgraph in the DFS code tree. The branch-and-bound algorithm gMSV is summarized in Algorithm 1.
5 Experiments
In order to evaluate the performance of the proposed solution to the problem of feature selection for graph classification using multiple side views, we tested our algorithm on brain network datasets derived from neuroimaging, as introduced in Sect. 3.1.
5.1 Experimental setup
To the best of our knowledge, this paper is the first work on leveraging side information in the feature selection problem for graph classification. In order to evaluate the performance of the proposed method, we compare our method with other methods using different statistical measures and discriminative score functions. For all the compared methods, gSpan [38] is used as the underlying searching strategy. Note that although alternative algorithms are available [17, 18, 37], the search step efficiency is not the focus of this paper. The compared methods are summarized as follows:
gMSV: The proposed discriminative subgraph selection method using multiple side views. Following the observation in Sect. 3.2 that side information consistency is verified to be significant in all the side views, the parameters in gMSV are simply set to \(\lambda ^{(1)}=\cdots =\lambda ^{(v)}=1\) for experimental purposes. In the case where some side views are suspected to be redundant, we can adopt the alternative optimization strategy to iteratively select discriminative subgraph patterns and update view weights.
gSSC: A semi-supervised feature selection method for graph classification based upon both labeled and unlabeled graphs. The parameters in gSSC are set to \(\alpha =\beta =1\) unless otherwise specified [21].
Discriminative Subgraphs (Conf, Ratio, Gtest, HSIC): Supervised feature selection methods for graph classification based upon confidence [12], frequency ratio [16–18], G test score [37], and HSIC [20], respectively. The top-k discriminative subgraph features are selected in terms of different discrimination criteria.
Frequent Subgraphs (Freq): In this approach, the evaluation criterion for subgraph feature selection is based upon frequency. The top-k frequent subgraph features are selected.
We append the side view data to the subgraph-based graph representations computed by the above algorithms before feeding the concatenated feature vectors to the classifier. Another baseline that only uses side view data is denoted as MSV. For a fair comparison, we used LibSVM [9] with linear kernel as the base classifier for all the compared methods. In the experiments, 3-fold cross validations were performed on balanced datasets. To get the binary links, we performed simple thresholding over the weights of the links. The threshold for fMRI and DTI datasets was 0.9 and 0.3, respectively.
Fig. 3 Classification performance on the fMRI dataset with different numbers of features
Fig. 4 Classification performance on the DTI dataset with different numbers of features
5.2 Performance on graph classification
The experimental results on fMRI and DTI datasets are shown in Figs. 3 and 4, respectively. The average performances with different numbers of features of each method are reported. Classification accuracy is used as the evaluation metric.
In Fig. 3, our method gMSV can achieve classification accuracy as high as 97.16% on the fMRI dataset, which is significantly better than the union of other subgraph-based features and side view features. The black solid line denotes the method MSV, the simplest baseline that uses only side view data. Conf and Ratio can do slightly better than MSV. Freq adopts an unsupervised process for selecting subgraph patterns, resulting in a comparable performance with MSV, indicating that there is no additional information from the selected subgraphs. Other methods that use different discrimination scores without leveraging the guidance from side views perform even worse than MSV in graph classification, because they evaluate the usefulness of subgraph patterns solely based on the limited label information from a small sample size of brain networks. The selected subgraph patterns can potentially be redundant or irrelevant, thereby compromising the effects of side view data. Importantly, gMSV outperforms the semi-supervised approach gSSC which explores the unlabeled graphs based on the separability property. This indicates that rather than simply considering that unlabeled graphs should be separated from each other, it would be better to regularize such separability/closeness to be consistent with the available side views. Similar observations are found in Fig. 4, where gMSV outperforms other baselines by achieving a good performance as high as 97.33% accuracy on the DTI dataset. We notice that only gMSV is able to do better than MSV by adding complementary subgraph-based features to the side view features. Moreover, the performances of other schemes are not consistent over the two datasets. The 2nd and 3rd best schemes, Conf and Ratio, for fMRI do not perform as well for DTI. These results support our premise that exploring a plurality of side views can boost the performance of graph classification, and the gSide evaluation criterion in gMSV can find more informative subgraph patterns for graph classification than subgraphs based on frequency or other discrimination scores.
Fig. 5 Average CPU time for pruning versus unpruning with varying min_sup
Fig. 6 Average number of subgraph patterns explored in the mining procedure for pruning versus unpruning with varying min_sup
5.3 Time and space complexity
Next, we evaluate the effectiveness of pruning the subgraph search space by adopting the lower bound of gSide in gMSV. In this section, we compare the runtime performance of two implementation versions of gMSV: the pruning gMSV uses the lower bound of gSide to prune the search space of subgraph enumerations, as shown in Algorithm 1; the unpruning gMSV denotes the method without pruning in the subgraph mining process, e.g., obtained by deleting line 13 in Algorithm 1. We tested both approaches and recorded the average CPU time used and the average number of subgraph patterns explored during the procedure of subgraph mining and feature selection. The comparisons with respect to the time complexity and the space complexity are shown in Figs. 5 and 6, respectively. On both datasets, the unpruning gMSV needs to explore an exponentially larger subgraph search space as we decrease the min_sup value in the subgraph mining process. When the min_sup value is too low, the subgraph enumeration step in the unpruning gMSV can run out of memory.
However, the pruning gMSV remains effective and efficient even when the min_sup value becomes very low, because pruning the subgraph search space via the lower bound of gSide keeps its running time and space requirements from growing as quickly as those of the unpruning gMSV. The focus of this paper is to investigate side information consistency and to explore multiple side views in discriminative subgraph selection. As potential alternatives to the gSpan-based branch-and-bound algorithm, we could employ other, more sophisticated search strategies with our proposed multi-side-view evaluation criterion, gSide. For example, gSide could replace the G test score in LEAP [37] or the log ratio in COM [17] and GAIA [18]. However, as shown in Figs. 5 and 6, our proposed solution with pruning, gMSV, remains feasible at \(min\_sup=4\%\); considering the limited number of subjects in medical experiments, as introduced in Sect. 3.1, gMSV is efficient enough for neurological disorder identification, where subgraph patterns supported by too few graphs are not desired.

5.4 Effects of side views

In this section, we investigate the contributions from different side views. The well-known precision, recall, and F1 are used as metrics: precision is the fraction of positive predictions that are positive subjects, recall is the fraction of positive subjects that are predicted as positive, and F1 is the harmonic mean of precision and recall (a minimal computation sketch is given below). Table 4 shows the performance of gMSV on the fMRI dataset when only one side view is considered at a time. In general, the best performance is achieved by simultaneously exploring all side views. Specifically, we observe that the side view flow cytometry can independently provide the most informative side information for selecting discriminative subgraph patterns on the fMRI brain networks. This is plausible, as it implies that HIV brain alterations in terms of functional connectivity are most likely to be expressed in this side view (i.e., in measures of immune function, the HIV hallmark). It is consistent with our finding in Sect. 3.2 that the side view flow cytometry is the most significantly correlated with the prespecified label information. Similar results on the DTI dataset are shown in Table 5.

Table 4 Average classification performance of gMSV on the fMRI dataset with different single side views

Table 5 Average classification performance of gMSV on the DTI dataset with different single side views

5.5 Feature evaluation

Figures 7 and 8 display the most discriminative subgraph patterns selected by gMSV from the fMRI dataset and the DTI dataset, respectively. These findings on functional and structural networks are consistent with other in vivo studies [8, 35] and with the pattern of brain injury at autopsy [11, 23] in HIV infection. With the approach presented in this analysis, alterations in the brain can be detected in initial stages of injury and in the context of clinically meaningful information, such as host immune status and immune response (flow cytometry), immune mediators (plasma luminex) and cognitive function (neuropsychological tests). This approach optimizes the valuable information inherent in complex clinical datasets. Strategies for combining various sources of clinical information have promising potential for informing an understanding of disease mechanisms, for identifying new therapeutic targets and for discovering biomarkers to assess risk and to evaluate response to treatment.
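The precision, recall, and F1 metrics used in Sect. 5.4 can be computed from predicted and true labels as in the following small generic helper (not part of the paper's code):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Precision, recall and F1 for binary labels, exactly as defined above."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if p != positive and t == positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```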
Fig. 7 Discriminative subgraph patterns associated with HIV, selected from the fMRI dataset

Fig. 8 Discriminative subgraph patterns associated with HIV, selected from the DTI dataset

6 Related work

To the best of our knowledge, this paper is the first work exploring side information in the task of subgraph feature selection for graph classification. Our work is related to subgraph mining techniques and to multi-view feature selection problems; we briefly discuss both.

Mining subgraph patterns from graph data has been studied extensively by many researchers, and a variety of filtering criteria have been proposed. A typical evaluation criterion is frequency, which aims at finding subgraph features that appear in a graph dataset with a frequency satisfying a prespecified min_sup value. Most frequent subgraph mining approaches are unsupervised. For example, Yan and Han developed a depth-first search algorithm, gSpan [38]. This algorithm builds a lexicographic order among graphs and maps each graph to a unique minimum DFS code as its canonical label. Based on this lexicographic order, gSpan adopts a depth-first search strategy to mine frequent connected subgraphs efficiently. Many other approaches for frequent subgraph mining have also been proposed, e.g., AGM [14], FSG [22], MoFa [3], FFSM [13], and Gaston [26]. Moreover, the problem of supervised subgraph mining has been studied in recent work examining how to improve the efficiency of searching for discriminative subgraph patterns for graph classification. Yan et al. introduced two concepts, structural leap search and frequency-descending mining, and proposed LEAP [37], which is one of the first works in discriminative subgraph mining. Thoma et al. proposed CORK, which can yield a near-optimal solution using greedy feature selection [33]. Ranu and Singh proposed a scalable approach, called GraphSig, that is capable of mining discriminative subgraphs with a low frequency threshold [28]. Jin et al. proposed COM, which takes into account the co-occurrences of subgraph patterns, thereby facilitating the mining process [17]. Jin et al. further proposed an evolutionary computation method, called GAIA, to mine discriminative subgraph patterns using a randomized search strategy [18]. Our proposed criterion gSide can be combined with these efficient search algorithms to speed up the mining of discriminative subgraph patterns, by substituting gSide for the G test score in LEAP [37] or the log ratio in COM [17] and GAIA [18]. Zhu et al. designed a diversified discrimination score based on the log ratio which can reduce the overlap between selected features by considering the embedding overlaps in the graphs [39]; a similar idea can be integrated into gSide to improve feature diversity.

There are some recent works on combining multi-view learning and feature selection. Tang et al. studied unsupervised multi-view feature selection by constraining similar data instances from each view to have similar pseudo-class labels [31]. Cao et al. explored tensor products to bring different views together in a joint space and presented a dual method of tensor-based multi-view feature selection [4]. Aggarwal et al. considered side information for text mining [1]. However, these methods are limited in that they require a set of candidate features as input, and therefore they are not directly applicable to graph data. Wu et al.
considered the scenario where one object can be described by multiple graphs generated from different feature views and proposed an evaluation criterion to estimate the discriminative power and the redundancy of subgraph features across all views [36]. In contrast, in this paper we assume that one object can have other data representations from side views in addition to the primary graph view. In the context of graph data, the subgraph features are embedded within the complex graph structures, and it is usually not feasible to enumerate the full set of features for a graph dataset before feature selection; in fact, the number of subgraph features grows exponentially with the size of the graphs. In this paper, we explore the side information from multiple views to effectively facilitate the procedure of discriminative subgraph mining. Our proposed feature selection for graph data is integrated into the subgraph mining process, which can efficiently prune the search space, thereby avoiding exhaustive enumeration of all subgraph features.

7 Conclusion and future work

We presented an approach for selecting discriminative subgraph features using multiple side views. By leveraging the available information from multiple side views together with the graph data, the proposed method gMSV achieves very good performance on the problem of feature selection for graph classification, and the selected subgraph patterns are relevant to disease diagnosis. This approach has broad applicability for yielding new insights into brain network alterations in neurological disorders and for early diagnosis. A potential extension of our method is to combine fMRI and DTI brain networks to find discriminative subgraph patterns in the sense of both functional and structural connections. Other extensions include better exploiting the weighted links in the multi-side-view setting. It would also be interesting to apply our model to other domains where graph data and side information aligned with the graphs are available. For example, in bioinformatics, chemical compounds can be represented by graphs based on their inherent molecular structures and are associated with properties such as drug repositioning, side effects, and ontology annotations. Leveraging all of this information to find discriminative subgraph patterns could be transformative for drug discovery.

http://rfmri.org/DPARSF.

http://fsl.fmrib.ox.ac.uk/fsl/fslwiki.

This work is supported in part by NSF through grants III-1526499, CNS-1115234, and OISE-1129076, a Google Research Award, the Pinnacle Lab at Singapore Management University, and NIH through grant R01-MH080636.

Department of Computer Science, University of Illinois at Chicago, Chicago, IL, USA
Department of Computer Science, Worcester Polytechnic Institute, Worcester, MA, USA
Institute for Data Science, Tsinghua University, Beijing, China
Department of Radiology, Northwestern University, Chicago, IL, USA

Aggarwal CC, Zhao Y, Yu PS (2012) On the use of side information for mining text data. TKDE pp 1–1Google Scholar Bar-Hillel A, Hertz T, Shental N, Weinshall D (2005) Learning a mahalanobis metric from equivalence constraints. J Mach Learn Res 6(6):937–965MATHMathSciNetGoogle Scholar Borgelt C, Berthold MR (2002) Mining molecular fragments: finding relevant substructures of molecules. In: IEEE ICDM, pp 51–58Google Scholar Cao B, He L, Kong X, Yu PS, Hao Z, Ragin AB (2014) Tensor-based multi-view feature selection with applications to brain diseases.
In: IEEE ICDM, pp 40–49Google Scholar Cao B, Kong X, Kettering C, Yu PS, Ragin AB (2015) Determinants of HIV-induced brain changes in three different periods of the early clinical course: a data mining analysis. NeuroImageGoogle Scholar Cao B, Kong X, Yu PS (2015) A review of heterogeneous data mining for brain disorder identification. Brain Informatics. doi:10.1007/s40708-015-0021-3 Cao B, Zhan L, Kong X, Yu PS, Vizueta N, Altshuler LL, Leow AD (2015) Identification of discriminative subgraph patterns in fMRI brain networks in bipolar affective disorder. In: Brain informatics and health, SpringerGoogle Scholar Castelo JMB, Sherman SJ, Courtney MG, Melrose RJ, Stern CE (2006) Altered hippocampal-prefrontal activation in HIV patients during episodic memory encoding. Neurology 66(11):1688–1695View ArticleGoogle Scholar Chang CC, Lin CJ (2001) LIBSVM: a library for support vector machines. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm Dai W, Xue GR, Yang Q, Yu Y (2007) Co-clustering based classification for out-of-domain documents. In: ACM KDD, pp 210–219Google Scholar Luthert PJ, Lantos PL (1993) Neuronal number and volume alterations in the neocortex of hiv infected individuals. J Neurol Neurosurg Psychiatry 56(5):481–486View ArticleGoogle Scholar Gao C, Wang J (2010) Direct mining of discriminative patterns for classifying uncertain data. In: ACM KDD, pp 861–870Google Scholar Huan J, Wang W, Prins J (2003) Efficient mining of frequent subgraphs in the presence of isomorphism. In: IEEE ICDM, pp 549–552Google Scholar Inokuchi A, Washio T, Motoda H (2000) An apriori-based algorithm for mining frequent substructures from graph data. In: Principles of data mining and knowledge discovery. Springer, pp 13–23Google Scholar Jenkinson M, Pechaud M, Smith S (2005) BET2: MR-based estimation of brain, skull and scalp surfaces. In: Eleventh annual meeting of the organization for human brain mapping, p 17Google Scholar Jin N, Wang W (2011) LTS: Discriminative subgraph mining by learning from search history. In: IEEE ICDE, pp 207–218Google Scholar Jin N, Young C, Wang W (2009) Graph classification based on pattern co-occurrence. In: ACM CIKM, pp 573–582Google Scholar Jin N, Young C, Wang W (2010) GAIA: graph classification using evolutionary computation. In: ACM SIGMOD, pp 879–890Google Scholar Kong X, Ragin AB, Wang X, Yu PS (2013) Discriminative feature selection for uncertain graph classification. In: SIAM SDM, pp 82–93Google Scholar Kong X, Yu PS (2010) Multi-label feature selection for graph classification. In: IEEE ICDM, pp 274–283Google Scholar Kong X, Yu PS (2010) Semi-supervised feature selection for graph classification. In: ACM KDD, pp 793–802Google Scholar Kuramochi M, Karypis G (2001) Frequent subgraph discovery. In: IEEE ICDM, pp 313–320Google Scholar Langford TD, Letendre SL, Larrea GJ, Masliah E (2003) Changing patterns in the neuropathogenesis of hiv during the haart era. Brain Pathol 13(2):195–210View ArticleGoogle Scholar Mihalkova L, Huynh T, Mooney RJ (2007) Mapping and revising markov logic networks for transfer learning. In: AAAI, vol 7, pp 608–614Google Scholar Mihalkova L, Mooney RJ (2009) Transfer learning from minimal target data by mapping across relational domains. In: IJCAI, vol 9, pp 1163–1168Google Scholar Nijssen S, Kok JN (2004) A quickstart in frequent structure mining can make a difference. 
In: ACM KDD, pp 647–652Google Scholar Ragin AB, Du H, Ochs R, Wu Y, Sammet CL, Shoukry A, Epstein LG (2012) Structural brain alterations can be detected early in HIV infection. Neurology 79(24):2328–2334View ArticleGoogle Scholar Ranu S, Singh AK (2009) Graphsig: a scalable approach to mining significant subgraphs in large graph databases. In: IEEE ICDE, pp 844–855Google Scholar Shi X, Kong X, Yu PS (2012) Transfer significant subgraphs across graph databases. In: SIAM SDM, pp 552–563Google Scholar Smith SM (2002) Fast robust automated brain extraction. Hum Brain Mapp 17(3):143–155View ArticleGoogle Scholar Tang J, Hu X, Gao H, Liu H (2013) Unsupervised feature selection for multi-view data in social media. In: SIAM SDM, pp 270–278Google Scholar Tang W, Zhong S (2006) Pairwise constraints-guided dimensionality reduction. In: SDM workshop on feature selection for data miningGoogle Scholar Thoma M, Cheng H, Gretton A, Han J, Kriegel HP, Smola AJ, Song L, Yu PS, Yan X, Borgwardt KM (2009) Near-optimal supervised feature selection among frequent subgraphs. In: SIAM SDM, pp 1076–1087Google Scholar Tzourio-Mazoyer N, Landeau B, Papathanassiou D, Crivello F, Etard O, Delcroix N, Mazoyer B, Joliot M (2002) Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage 15(1):273–289View ArticleGoogle Scholar Wang X, Foryt P, Ochs R, Chung JH, Wu Y, Parrish T, Ragin AB (2011) Abnormalities in resting-state functional connectivity in early human immunodeficiency virus infection. Brain Connect 1(3):207–217View ArticleGoogle Scholar Wu J, Hong Z, Pan S, Zhu X, Cai Z, Zhang C (2014) Multi-graph-view learning for graph classification. In: IEEE ICDM, pp 590–599Google Scholar Yan X, Cheng H, Han J, Yu PS (2008) Mining significant graph patterns by leap search. In: ACM SIGMOD, pp 433–444Google Scholar Yan X, Han J (2002) gspan: graph-based substructure pattern mining. In: IEEE ICDM, pp 721–724Google Scholar Zhu Y, Yu JX, Cheng H, Qin L (2012) Graph classification: a diversified discriminative feature selection approach. In: ACM CIKM, pp 205–214Google Scholar
Capillary bridge formation between hexagonally ordered carbon nanorods

Lukas Ludescher, Stephan Braxmeier, Christian Balzer, Gudrun Reichenauer, Florian Putz, Nicola Hüsing, Gennady Y. Gor & Oskar Paris

Adsorption volume 26, pages 563–578 (2020)

Capillary condensation within the pore space formed by a hexagonal arrangement of carbon nanorods is investigated using a thermodynamic model. Numerical solution of the corresponding non-linear differential equations predicts two characteristic equilibrium phase transitions, corresponding to liquid-bridge formation between adjacent rods and to the subsequent filling of the entire pore space with liquid adsorbate at higher relative pressure, respectively. These separate transitions are predicted for a wide range of porosities, as demonstrated for two non-polar fluids, nitrogen and n-pentane, employing experimentally determined reference isotherms to model the fluid–solid interactions. The theoretical predictions are compared to experimental data for nitrogen and n-pentane adsorption in an ordered mesoporous CMK-3 type material, with the necessary structural parameters obtained from small-angle X-ray scattering. Although the experimental adsorption isotherms do not unambiguously show two separate transitions, owing to a high degree of structural disorder of the mesopore space, their general trends are consistent with the theoretical predictions for both adsorbates.

Experimental and theoretical studies of gas adsorption in mesoporous materials have been performed for many decades (Haul et al. 1982). The emergence of novel material synthesis routes enabled the fabrication of monodisperse and highly ordered cylindrical pore systems such as MCM-41 (Kresge et al. 1992) or SBA-15 (Zhao et al. 1998) silica materials, allowing for the quantitative validation of theories of adsorption and capillary condensation based on both macroscopic thermodynamics (Broekhoff 1967) and atomistic descriptions (Ravikovitch et al. 1998). In the early 2000s, a new family of templated mesoporous carbon materials was developed by using ordered mesoporous silicas as templates (Ryoo et al. 2001; Jun et al. 2000). A representative member of this family is CMK-3 carbon, derived from negative templating of SBA-15 silica and exhibiting hexagonally ordered carbon nanorods with an intermediate pore space resembling the original silica template. The mesopores in CMK-3 are therefore qualitatively different from the cylindrical pores in SBA-15, forming a continuous pore space whose narrowest constrictions lie between adjacent nanorods. Some consequences of this special pore geometry were evident already in the very early publications on CMK-3: the shapes of the nitrogen adsorption isotherms are quite different from those of the original silica templates, and some of them show two step-like ascents at significantly different relative pressures (Jun et al. 2000; Joo et al. 2002). However, the physical origin of these features has not been addressed in detail so far, and gas adsorption in CMK-3 was frequently modeled as if the pores were cylindrical (Jun et al. 2000; Joo et al. 2002). Density functional theory (DFT) kernels developed for the calculation of pore size distributions in CMK-3 materials were also based on the cylindrical pore model (Gor et al. 2012).
While this model gave quite satisfactory predictions for the mean pore sizes, some of the data showed a "bimodal" size distribution in the mesopore regime, pointing also towards two separate condensation events. Generally, there is a clear discrepancy in the shape of a theoretical isotherm based on the cylindrical pore model with a sharp condensation point, and the much smoother adsorption isotherm measured experimentally on CMK-3 carbons. To more accurately describe the adsorption isotherms on CMK-3 and alike materials, alternative models were developed, explicitly taking the rod-like geometry into account (Barrera et al. 2013; Jain et al. 2017; Yelpo et al. 2017). These alternative models were mainly based on molecular simulations using Grand Canonical Monte Carlo (GCMC). Barrera et al. (2013) followed the same principal idea as the DFT model by Gor et al. (2012) by representing the pore space of CMK-3 as a collection of cylindrical and slit pores. The interaction between the carbon nanorods and the nitrogen atoms was modeled by a Steele potential for micropores and an integrated, cylindrical Lennard–Jones potential for the mesopores. The resulting kernel was able to reproduce the results from Gor et al. (2012), thus justifying the applicability of the GCMC method. Yelpo et al. (2017) took this approach one step further and determined the carbon potential inside the pore space of CMK-3, creating a more realistic description for the GCMC simulation. Again, kernels were calculated and used to fit experimental data. Furthermore, a TEM image was analyzed and the pore size distribution obtained from TEM was compared to the pore size distribution obtained by their kernel. Jain et al. (2017) took a different route using an all-atom description of several CMK type systems. They represented the carbon rods by an assembly of individual carbon atoms and used a Lennard–Jones-type potential for the individual atoms to describe the interaction between the gas phase and the solid. Interconnections between the individual carbon rods were also introduced, to more accurately describe the CMK materials, which are seemingly necessary to get qualitatively sound results from the simulations. The adsorptive in this study was argon and the adsorption and desorption was modeled with GCMC for CMK-1, CMK-3 and CMK-5. The results from that study gave some insight into the role of imperfections in CMK materials, especially CMK-3. Although the methods for modeling adsorption based on molecular simulations have become state of the art, there is still a drawback compared to macroscopic theories: generalization of the results for a different adsorbate would require setting up a new time-consuming simulation. In this sense, macroscopic approaches for modeling adsorption in mesoporous materials, such as theories developed by Derjaguin (1992) and Broekhoff and de Boer (1967) (DBdB theory), or by Saam and Cole (1975) are advantageous. While requiring only a few parameters and being not computationally intense, they reliably predict the adsorption isotherms for cylindrical pores not only for simple gases such as nitrogen or argon (Neimark and Ravikovitch 2001), but also for more complex molecules, such as water, methanol, toluene (Lépinay et al. 2015), pentane (Gor et al. 2013) or perfluoropentane (Hofmann et al. 2016). Attempts to describe the adsorption and capillary condensation for geometries other than simple slit- or cylindrical pores were carried out by Philip (1977b) already in the 1970s for two parallel cylinders. 
This idea was further developed by Morishige and Nakahara (2008) into a comprehensive theoretical framework for the transition from a liquid film phase to a "bridged" phase, effectively spanning the void space between the two adjacent cylinders. Dobbs and Yeomans (1993) extended the approach of Philip by numerically minimizing the grand potential of different configurations of liquid in the open pore space between cylindrical rods located on a square lattice. However, that paper was published several years before the emergence of CMK-3, and to the best of our knowledge its predictions were never adapted to the hexagonal geometry and neither have they been compared to real experimental data. Here we solve the problem of the hexagonal geometry of CMK-3 adapting the thermodynamic model from Ref. (Dobbs and Yeomans 1993). We calculate explicit solutions of the corresponding non-linear differential equations for two different adsorbates, namely nitrogen and n-pentane, by using appropriate reference isotherms to model the solid–fluid interaction (Gor and Neimark 2010, 2011). Equilibrium phase diagrams separating a "film phase" (liquid film on the carbon nanorods), a "bridged phase" (liquid bridges between adjacent nanorods) and a "filled" phase (entire liquid filled pore space) are obtained. Calculations with a simplified, analytical model of the "bridged phase" are performed to elucidate numerical results and provide comparison to traditional characterization approaches such as the Kelvin-Cohan (Neimark et al. 2003) equation. Predictions from the numerical results are compared with experimentally measured adsorption isotherms from CMK-3-like carbon materials (Koczwara et al. 2017) using nitrogen at 77 K and n-pentane at 290 K. Theoretical model For the thermodynamic description of adsorption in CMK-3-like materials we employ the first variation of the grand potential of three competing unique distributions of liquid-like phases in the open mesopore space (see Fig. 1). "Separated" phase: an adsorbed layer (liquid-like film) is present on each individual rod, but the films are not connected with each other. "Bridged" phase: a liquid bridge exists between neighboring rods, with a void space remaining between the three rods. "Filled" phase: The entire space between the carbon nanorods is filled with liquid. Sketch of the top-view pore-space geometry of the hexagonally arranged cylindrical carbon nanorods (grey) with radius r and distance D. Dark blue areas indicate the "separated" phase (liquid adsorbed film around each cylinder), while liquid bridges between the rods are labeled by light blue. The image at the bottom shows an enlarged detail including the two cylindrical coordinate systems used. For the "separated" phase the origin is set into the center of the rod (black system) and is denoted with the subscript 1, while for the "bridged phase" the origin is set into the center of the triangle (red) with the subscript 2 in the respective equations (Color figure online) All derivations are based on the assumption that the aspect ratio between the diameter of the rods and their length is small, meaning that we can restrict the description to the plane perpendicular to the rod axis. Because of the radial symmetry of the rods we use cylindrical coordinates. 
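As a concrete illustration of the unit-cell geometry sketched in Fig. 1, the short Python sketch below evaluates the quantities that enter the grand potentials derived in the following subsections: the triangular cell area \(\sqrt{3}R^2\) with \(R=D/2\), the solid cross-section \(\pi r^2/2\), the resulting mesoporosity, and the maximum inscribed radius between three rods (given as Eq. 15 further below). The helper itself is ours, not part of the authors' code; the example values reproduce the sample parameters quoted later in the comparison with experiment (D = 10.1 nm, r = 3.9 nm, mesoporosity of about 46%, D/r ≈ 2.6).

```python
import math

def unit_cell_geometry(D, r):
    """Geometric quantities of the 2D-hexagonal rod arrangement of Fig. 1.

    D : center-to-center distance of adjacent rods [nm]
    r : rod radius [nm]
    """
    R = D / 2.0                                # half-distance used in Eqs. (6) and (8)
    cell_area = math.sqrt(3.0) * R**2          # triangle spanned by three rod centers
    solid_area = 0.5 * math.pi * r**2          # three 60-degree rod sectors inside the cell
    porosity = 1.0 - solid_area / cell_area    # mesoporosity of the rod array
    r_u = math.sqrt(3.0) / 3.0 * D - r         # maximum inscribed radius (Eq. 15 below)
    return porosity, r_u, D / r

phi, r_u, ratio = unit_cell_geometry(10.1, 3.9)
print(f"porosity = {phi:.2f}, r_u* = {r_u:.2f} nm, D/r = {ratio:.2f}")
# -> porosity = 0.46, r_u* = 1.93 nm, D/r = 2.59
```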
To simplify calculations, we consider two different coordinate systems, with the "separated" phase having its origin in the center of the rods, while for the "bridged" phase the origin is located in the center of the unit cell set up by the three rods (see Fig. 1). To model fluid adsorption in this system, the grand potential per unit length of the rod \(\Omega \) is treated as a functional \(\Omega \left(l\right)\) of the liquid profile, where l is the radial coordinate. We assume uniform density of the liquid, meaning that \(\Omega \) will depend on geometry only. This results in $$ \Omega = \int_{{\theta _{a} }}^{{\theta _{b} }} {f\left( {\theta ,l,l_{\theta } } \right)d\theta } $$ with θa and θb being the limits of the angular interval, l being the distance of the vapor–liquid interface from the origin as a function of θ, and lθ being the derivative of l with respect to θ. Governing equations for the phases Adapting the equation for the grand potential of the "separated" phase from (Dobbs and Yeomans 1993) to the hexagonal geometry, we have $$ \Omega \left( {l_{1} } \right) = 6\int_{0}^{{\frac{\pi }{6}}} {d\theta _{1} \left( {\gamma \left( {l_{1}^{2} + l_{{\theta _{1} }}^{2} } \right)^{{\frac{1}{2}}} } \right)} + \Delta \tilde{\mu }\left( {6\int_{0}^{{\frac{\pi }{6}}} {d\theta _{1} \frac{1}{2}l_{1} \left( {\theta _{1} } \right)^{2} - \frac{\pi }{2}r^{2} } } \right) + 6\int_{0}^{{\frac{\pi }{6}}} {d\theta _{1} V\left( {l_{1} ,\theta _{1} } \right)}. $$ The subscript 1 denotes the "separated" phase with l1 ≡ l(θ1) and \({l}_{{\theta }_{1}}\)≡ dl1/dθ1. The first part of the equation describes the contribution of the liquid–vapor interfacial energy γ, the second term describes the influence of the chemical potential \(\Delta \tilde{\mu }\) of the liquid, and the last term describes the energy due to the solid–liquid film potential \(V\left(l,\theta \right)\). The chemical potential per unit volume \(\Delta \tilde{\mu }\) is: $$ {\Delta }\tilde{\mu } = \frac{{R_{g} T}}{{v_{l} }}\ln \left( {\frac{p}{{p_{0} }}} \right) $$ where vl is the molar volume of the liquid, Rg is the universal gas constant, T is the absolute temperature and \(p/{p}_{0}\) is the relative pressure. The last term in Eq. 2 represents the integrated effect of the solid–fluid interactions, which can be related to Derjaguin's disjoining pressure Π $$ V\left( {l,\theta } \right) = \int\limits_{{l\left( \theta \right)}}^{{l_{{max}} }} {\Pi \left( {l,\theta } \right)l dl} $$ where lmax is the maximum value of the profile to be considered, which helps keeping the integration limited to a unit cell in ordered systems. The detailed discussion of Π(l) is given in Sect. 2.2. Application of the Euler–Lagrange equation to Eq. 2 leads to a second order, non-linear, ordinary differential equation, which minimized the grand potential in any radially symmetric case: $$ \gamma \frac{d}{{d\theta _{1} }}\left( {\frac{{l_{{\theta _{1} }} }}{{\left( {l_{1}^{2} + l_{{\theta _{1} }}^{2} } \right)^{{\frac{1}{2}}} }}} \right) - \gamma \frac{{l_{1} }}{{\left( {l_{1}^{2} + l_{{\theta _{1} }}^{2} } \right)^{{\frac{1}{2}}} }} + l_{1} \left( {\Delta \tilde{\mu } - \Pi \left( {\theta _{1} ,l_{1} } \right)} \right) = 0. $$ Noteworthy, for cylindrical pores Eq. 5 reduces to Derjaguin's equation (Broekhoff 1967; Derjaguin 1992). For the "bridged" phase we set the origin of the coordinate system in the center of the rod (Philip 1977b; Dobbs and Yeomans 1993; Gatica et al. 2002) as depicted in Fig. 1 with subscript 2 (l2 ≡ l(θ2)) and Eq. 
2 becomes $$ \Omega \left( {l_{2} } \right) = 6\int\limits_{0}^{{\frac{\pi }{3}}} {d\theta _{2} \left( {\gamma \left( {l_{2}^{2} + l_{{\theta _{2} }}^{2} } \right)^{{\frac{1}{2}}} } \right)} + \Delta \tilde{\mu }\left( {{\sqrt 3 }R^{2} - \frac{\pi }{2}r^{2} - 6\int\limits_{0}^{{\frac{\pi }{3}}} {d\theta _{2} \frac{1}{2}l_{2} \left( {\theta _{2} } \right)^{2} } } \right) + 6\int\limits_{0}^{{\frac{\pi }{3}}} {d\theta _{2} V\left( {l_{2} ,\theta _{2} } \right)}. $$ Here, R = D/2 is the half-distance between the centers of the adjacent rods and r is the radius of the rods (see Fig. 1), and we change the integration limit from \(\frac{\pi }{6}\) to \(\frac{\pi }{3}\) because of the change in origin from the center of a rod to the center of the interstitial void space. Minimizing the functional we obtain again a second order, non-linear ordinary differential equation $$ \gamma \frac{d}{{d\theta _{2} }}\left( {\frac{{l_{{\theta _{2} }} }}{{\left( {l_{2}^{2} + l_{{\theta _{2} }}^{2} } \right)^{{\frac{1}{2}}} }}} \right) - \gamma \frac{{l_{2} }}{{\left( {l_{2}^{2} + l_{{\theta _{2} }}^{2} } \right)^{{\frac{1}{2}}} }} - l_{2} \left( {\Delta \tilde{\mu } - \Pi \left( {\theta _{2} ,l_{2} } \right)} \right) = 0. $$ In the case of the completely filled pore space ("filled" phase) the sole contribution to the grand potential is the liquid inside the filled pore space $$ {\Omega } = \Delta\tilde{\mu } \left( {\sqrt 3 R^{2} - \frac{\pi }{2}r^{2} } \right). $$ We note that the values of the grand potential obtained in this study are not absolute but are offset by a constant contribution (containing the surface energy of the solid). The discussion of these terms is given elsewhere (Gor and Neimark 2010). While these terms do not affect the transition points, they contribute to the solvation pressure in the pore and affect the thermodynamic properties of the fluid (Hill 1952). To solve the differential equations Eqs. 5 and 7, two values of the film thickness or the slope at the boundaries of the unit cell have to be known. The boundary conditions for the "separated" phase, with the origin fixed at the center of the rod, are given by: $$ \left. {l_{{\theta_{1} }} } \right|_{{\theta_{1} = 0}} = 0 \,and\,\left. {l_{{\theta_{1} }} } \right|_{{\theta_{1} = \frac{\pi }{6}}} = 0 $$ Similarly, for the "bridged" phase the conditions read: $$ \left. {l_{{\theta_{2} }} } \right|_{{\theta_{2} = 0}} = 0 \, and \,\left. {l_{{\theta_{2} }} } \right|_{{\theta_{2} = \frac{\pi }{3}}} = 0 $$ Using these boundary conditions, the two-point Neumann boundary value problem was solved numerically, as outlined in Appendix A. Determination of the solid–fluid interaction potential The key term in the macroscopic theories of adsorption and capillary condensation is the term related to the solid–fluid interaction potential. Derjaguin used the concept of disjoining pressure Π(h) to represent it, where h is the film thickness, which can more generally be defined as the shortest distance between the substrate surface and the liquid–vapor interface. Disjoining pressure isotherms for adsorption of fluids on a flat surface are often modeled via the Frenkel-Halsey-Hill equation (Halsey 1948; Hill 1952): $$ {\Pi }\left( h \right) = - \frac{{R_{g} T}}{{v_{l} }}\frac{k}{{\left( {\frac{h}{{h_{0} }}} \right)^{m} }} $$ where h0 = 0.1 nm and k and m are the two free parameters of the model. While the DBdB theory (Broekhoff 1967; Derjaguin 1992) typically neglects the curvature of the pores and uses Eq. 
11 directly to represent the solid–fluid interactions in the cylindrical pore, we take here the curvature of the carbon nanorods into account. We use the integrated solid–fluid potential for an infinite rod derived for arbitrary inverse power-law potentials (Philip 1977a): $$ {\Pi }\left( h \right) = - \frac{{\pi^{\frac{3}{2}} {\Gamma }\left( {\frac{ \epsilon - 1}{2}} \right)}}{{{\Gamma }\left( {\frac{ \epsilon }{2}} \right)}} \alpha r^{2} \left( {h + r} \right)^{1 - \epsilon} F_{2;1} \left( {\frac{ \epsilon - 1}{2};\frac{ \epsilon - 1}{2};2;\left( {\frac{r}{h + r}} \right)^{2} } \right) $$ where Π is the film potential, α is the interaction parameter, ε is the exponent of the inverse-power-law, \(\Gamma \) denotes the Eulerian gamma function, and F2;1 is the generalized hypergeometric function. The parameters α and ε are determined by fitting from reference isotherms, assuming infinite, planar substrates in any case. Consequently, one can think of this as an empirical function, with the units of α depending on the value of ε. These parameters can be readily related to the parameters of the Frenkel-Halsey-Hill equation (Eq. 11): $$ m = \epsilon{-}3 $$ $$ R_{g} Tkh_{0}^{m} = \frac{2\pi \alpha }{{\left( { \epsilon - 3} \right)\left( { \epsilon - 2} \right)}} $$ Finally, we take into account that the adsorbing fluid interacts not with a single rod, but with the three rods of the unit cell and sum up the potentials at each single point of consideration, similar to the quadratic lattice considered in (Dobbs and Yeomans 1993). Computational results Reference isotherms and disjoining pressure Theoretical predictions of adsorption isotherms require the knowledge of the disjoining pressure isotherm Π(h). In order to be able to compare our numerical results with experimental data, we used available experimental reference isotherms from nitrogen and n-pentane adsorption on carbon (Fig. 2). For nitrogen adsorption at 77 K we used the literature data of activated carbon annealed at high temperature (2000 °C) for 2.5 h (Silvestre-Albero et al. 2014). For n-pentane, no literature data were available. Therefore, we employed own data from n-pentane adsorption on a carbon xerogel thermally annealed at 1800 °C for 50 min. This sample contained a negligibly small amount of micropores, and mesopores of some tens of nanometers in size (Balzer 2018). Ideally, the reference isotherm should have been measured on a non-porous or at least a macroporous only sample, but no such sample was available to perform n-pentane adsorption measurements. The presence of large mesopores is probably the reason why the fit in Fig. 2b deviates from the data at relative pressures above 0.6. Nevertheless, the first layers are properly described by the Frenkel-Halsey-Hill (FHH) isotherm and should therefore approximate the interaction of the fluid molecules with the carbon substrate with sufficient accuracy. The interaction parameters derived from the fits of Eqs. 11, 13 and 14 to the corresponding reference isotherms are presented in Table 1. Plot of the film thickness h of the reference isotherms for nitrogen on annealed activated carbon [data from (Silvestre-Albero et al. 2014)] (a), and n-pentane on a carbon xerogel (b). The experimental reference isotherms are shown by (black) squares, the FHH-fits with solid (red) lines (Color figure online) Table 1 Fitting parameters of the FHH isotherm (see Eq. 
11) to the reference isotherms We note the significance of the Cole-Saam approach (Saam and Cole 1975) for our work, which takes the curvature of the solid surface explicitly into account. In cylindrical mesopores, where capillary condensation usually happens at relatively low film thickness (only few monolayers of adsorbate), the effect of curvature is small. In our model, however, we need to consider the film potential at distances of several nanometers when solving the equations for the "bridged" phase. Figure 3 compares the disjoining pressure as a function of film thickness h for a flat surface and a single carbon cylinder with a radius of 3.7 nm. At low film thicknesses the overall difference between both potentials is small, but at distances above 1 nm, in case of the flat surface the overall potential is overestimated by 20 to 50 percent. Disjoining pressure calculated for a flat surface and for the surface of a cylinder with radius r = 3.7 nm for n-pentane on carbon (Balzer 2018). The inset shows the relative difference of the data for the flat surface in regard to the cylindrical potential as a function of the distance to the substrate Calculated isotherms and phase diagrams To predict adsorption isotherms for the geometry outlined in Fig. 1, the liquid–gas interface profiles for the "separated" and "bridged" phases were calculated by solving Eqs. 5 and 7, respectively. According to the available experimental data (see Sect. 4 and Appendix B), the rod radius r was varied between 3.3 and 4.4 nm in 0.05 nm steps, while the rod distance D was fixed at a value of 10.1 nm. Temperatures used in the simulations were 77.4 K and 290 K for nitrogen and n-pentane respectively, according to the temperatures at which the adsorption measurements were carried out. Experimental details are outlined in Appendix B. The differential equations were solved numerically by applying a finite difference scheme using a custom written code, outlined in some detail in Appendix A. Solutions were obtained for 20 equidistant relative pressure values ranging from 0.01 to 0.95 for both sets of interaction parameters. From the interface profiles l(θ) the grand potentials were calculated using Eqs. 2 and 6 for the "separated" and "bridged" phases, respectively, and Eq. 8 was employed for the "filled" phase. The thermodynamically stable phase at a chosen pressure is now simply given by the lowest value of Ω, with the exact relative pressure values at which transitions between the phases occur obtained via interpolation and root finding. A selection of profiles of the "bridged" phase for \(D/r =2.7\) are shown in Fig. 4a. With these profiles, the grand potential Ω of the "bridged" phase can be calculated (Eq. 6) and compared to the grand potentials of the "separated" and "filled" phases as shown in Fig. 4b. As can be seen in Fig. 4b, the "separated" phase is stable for low relative pressures, followed by the "bridged" phase and finally the "filled" phase. Figure 4c displays the corresponding nitrogen adsorption isotherm for a ratio D/r = 2.7, calculated from the "separated" and "bridged" profiles. The circles in Fig. 4c correspond to the film profiles for the "bridged" phase shown in Fig. 4a. With increasing relative pressure, the void space shrinks and changes its shape from rather triangular towards more circular, upon which the "bridged-to-filled" transition happens. a 2D interface profiles for the "bridged" phase. 
With increasing relative pressure, the overall size of the void space decreases, while the shape of the void space changes from more triangular towards more circular. The four relative pressures at which the profiles in (a) are shown are indicated by blue circles in panel (c). b The grand potential as a function of relative pressure for the three different phases (blue: "separated" phase; green: "bridged" phase; red: "filled" phase) for nitrogen on carbon and D/r = 2.7. c Corresponding adsorption isotherm (Color figure online) From the isotherms determined for different D/r ratios we can extract phase diagrams, showing the stability range of the respective phases as a function of relative pressure and D/r ratio. This ratio can be related to the maximum inscribed radius \({r}_{u}^{*}\) between the three cylindrical rods $$ r_{u}^{*} = \frac{\sqrt 3 }{3} D - r $$ or in a dimensionless representation $$ r^{\prime}_{u} = \frac{{r_{u}^{*} }}{r} = \frac{\sqrt 3 }{3}\frac{D}{r} - 1 $$ linking the reduced sizes to the pore geometry considered in earlier work (Ryoo et al. 2001). In Figs. 5a and 6a we show the resulting phase diagrams for nitrogen and n-pentane, respectively. For all calculations we used a fixed nanorod distance D = 10.1 nm, which is the mean value determined experimentally for the sample discussed (see Appendix B). Variation of D in the calculations was also considered, but its influence was found to be minor as compared to the impact of the nanorod radius r. The phase diagrams for the two fluids show a similar overall trend. Both adsorbates show a bridging transition at very low relative pressures for the smallest D/r ratio, which corresponds to a small distance of the rods of 1.3 nm only. With increasing D/r, the "separated" to "bridged" phase transition appears at increasingly larger relative pressures. While the "bridging" transition is strongly dependent on the nanorod radius, the "bridged" to "filled" transition appears at very similar relative pressures for all investigated radii (i.e. the corresponding phase boundary is almost vertical). The location of the phase boundaries are clearly different for the two investigated fluids, reflecting the different fluid–solid interactions. In particular, for a D/r ratio > 3.1 (corresponding to a mesoporosity > 62%), there is no more "bridged" phase for nitrogen, while for n-pentane there is still a quite broad stability range for the "bridged" phase. a Calculated phase diagram for nitrogen in CMK-3-like carbon at 77 K showing the "separated" phase (green), the "bridged" phase (white), and the "filled" phase (red). b Nitrogen (77 K) adsorption isotherm of hierarchically porous CMK-3-like carbon (black), and the derivative of the isotherm (grey). Vertical lines are drawn at the relative pressure of the two local maxima of the derivative indicating the pressure of the "separated-to-bridged" and the "bridged-to-filled" phase transitions in this sample. The grey vertical regions represent the uncertainty of the maximum derived from the experimental isotherm. The horizontal line in panel a indicates the ratio D/r of 2.6 measured with SAXS for the present sample, with the grey horizontal region representing the uncertainty of the experimentally determined D/r ratio (Color figure online) a Calculated phase diagram for n-pentane in CMK-3-like carbon at 290 K showing the "separated" phase (green), the "bridged" phase (white), and the"filled" phase (red). 
b n-pentane (290 K) adsorption isotherm in hierarchically porous CMK-3-like carbon (black), and the derivative of this curve (grey). Vertical lines are drawn at the relative pressure of two maxima of the derivative, indicating the pressure of the "separated-to-bridged" phase transition and the "bridged-to-filled" phase transition in this sample. The grey region represents the uncertainty of the maximum derived from the experimental isotherm. The horizontal line in a indicates the ratio D/r of 2.6 measured with SAXS for the present sample, with the grey horizontal region representing the uncertainty in the experimentally determined D/r ratio (Color figure online) Comparison with experiment Despite of some experimental hints towards a double condensation transition already in the very early papers on CMK-3 carbons (Jun et al. 2000; Joo et al. 2002), and the corresponding appearance of a "bimodal" mesopore diameter distribution (Gor et al. 2012), the idea of a "film-to-bridged" phase transition without accompanying complete pore filling was not confirmed experimentally so far. The reason might be that the two steps in the adsorption isotherms are not, or at least not unambiguously, seen in many experimental data sets of CMK-3. For instance, in ref. (Gor et al. 2012), one sample showed only a very slight indication of a second step, while for the other sample this second step was clearly visible. This is probably a consequence of a quite large amount of disorder in the arrangement and a large surface roughness of the nanorods, which may smear out such transitions. Here we provide some experimental evidence that the experimentally observed double steps may indeed be related to the two transitions predicted by the thermodynamic model in the previous section. We have chosen a carbon sample with hierarchical porosity, synthesized via nanocasting into a hierarchical silica sample with SBA-15 type cylindrical mesopores. The resulting carbon sample exhibits a micro-/meso-/macroporous structure with the hexagonally ordered cylindrical carbon nanorods forming a CMK-3-like pore geometry (Koczwara et al. 2017). The structural parameters characterizing the mesopore space, D = 10.1 nm and r = 3.9 nm obtained from SAXS (see Appendix B), correspond to a mesoporosity of 46% and a ratio D/r = 2.6. Figures 5b and 6b show the experimentally determined adsorption isotherms of the sample for nitrogen at 77 K and n-pentane at 290 K, respectively (see Appendix B for experimental details). The isotherms show a rapid increase at very low pressures, which is attributed to the filling of micropores within the carbon nanorods. Besides this micropore filling, two shoulders with inflection points at p/p0 ≈ 0.35 and p/p0 ≈ 0.7 for nitrogen, and at p/p0 ≈ 0.20 and p/p0 ≈ 0.6 for n-pentane are clearly recognized. This non-monotonic behavior becomes even clearer when considering the first derivative of the adsorption isotherm (grey curve in Figs. 5b and 6b). Thus, the hierarchical porous carbon material seemingly exhibits two distinct condensation events in adsorption for both nitrogen and n-pentane, but only a single one in desorption (shown in Appendix B). This is consistent with the study by Gor et al. (2012), where high resolution nitrogen adsorption isotherms recorded for CMK-3 samples showed two step-like features in adsorption, but only a single evaporation event for desorption. The data are also qualitatively in agreement with the very early papers on CMK-3 (Jun et al. 2000; Joo et al. 
2002), but unfortunately all those data sets did not provide values for the nanorod radius nor their distance, which prevented to include them into our analysis. We mention that although the hierarchical sample investigated here exhibits also macropores, their size range (micrometers) is not compatible with a condensation event at p/p0 ≈ 0.7 for nitrogen. Vertical dashed lines in Figs. 5 and 6 denote the experimental transition pressures given by the maxima of the derivative, and grey intervals visualize the uncertainty range. It is seen that the second transition ("bridged-to-filled") agrees with the calculated phase transition pressure for both fluids within the experimental error. In fact, this transition is not very sensitive to the D/r ratio, as the phase boundary is almost vertical in Figs. 5a and 6a. We note that this transition around p/p0 ≈ 0.7 also agrees quite well with the second hump in the isotherm shown in Ref. (Gor et al. 2012). For the "separated-to-bridged" phase, the experimental transition lines cross the phase boundary at D/r ≈ 2.65 for nitrogen and at D/r ≈ 2.7 for n-pentane, respectively. The horizontal line drawn in Figs. 5a and 6a indicates the D/r value of 2.6 measured experimentally with small-angle X-ray scattering (SAXS). The agreement between calculated phase diagram and the experimental result for this given D/r ratio appears to be excellent within the experimental errors. The thermodynamic model of a bridging transition between two cylindrical rods was developed by Philip (1977b) and extended to a quadratic array of four cylindrical rods by Dobbs and Yeomans (1993). The latter work allowed predicting three different thermodynamic phases during physical adsorption of fluids in such a system, i.e., a "separated" phase (liquid-like adsorbed film on the cylinders), a "bridged" phase (liquid bridges between the cylinders), and a "filled phase", where the entire space between the cylinders is filled with liquid. The transitions between these phases are first-order and should be observable in experiments by discontinuous, step-like events in the adsorption isotherms at specific relative vapor pressures. Unfortunately, the geometry of four cylinders on a square lattice proposed in (Dobbs and Yeomans 1993) is not realized experimentally for mesoporous materials, and also the hexagonal rod arrangement realized experimentally with CMK-3 came up only several years after Ref. (Dobbs and Yeomans 1993) was published. This is probably the reason why this elegant thermodynamic treatment of a complex, but still solvable pore space geometry has not found further attention so far. On a side note we mention that although such continuum approaches often exhibit deviations from microscopic simulations particularly at small pore sizes below 5 nm, general trends and at least qualitative agreement is still to be expected (Ravikovitch and Neimark 2000). In the present work we have reformulated the theory of Dobbs and Yeomans for a hexagonal lattice, and we have calculated theoretical phase diagrams for the adsorption of nitrogen and n-pentane on CMK-3 carbon by deriving the fluid–solid interactions from respective reference isotherms. The relative pressures of the predicted phase transitions are qualitatively consistent with condensation events between two neighboring rods and in the space between three hexagonally arranged rods, respectively, employing the Kelvin-Cohan equation (Appendix C). 
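As a rough, stand-alone consistency check of the kind referred to above (Appendix C itself is not reproduced in this excerpt), the Kelvin–Cohan estimate can be evaluated directly. The sketch below is ours and uses assumed literature values for nitrogen at 77 K (surface tension of about 8.9 mN/m and molar liquid volume of about 34.7 cm³/mol); it neglects the adsorbed film thickness and the solid–fluid potential that the full model includes, so the numbers are only indicative of the length scales expected for condensation around p/p0 ≈ 0.7.

```python
import math

R_GAS = 8.314  # J / (mol K)

def kelvin_radius(p_rel, gamma, v_l, T, meniscus="spherical"):
    """Kelvin(-Cohan) core radius [m] for condensation at relative pressure p_rel.

    gamma: liquid-vapor surface tension [N/m]; v_l: molar liquid volume [m^3/mol].
    A hemispherical meniscus gives the factor 2; the cylindrical meniscus of the
    Cohan adsorption branch gives 1.
    """
    factor = 2.0 if meniscus == "spherical" else 1.0
    return -factor * gamma * v_l / (R_GAS * T * math.log(p_rel))

# nitrogen at 77.4 K, assumed literature values (not taken from this paper)
gamma_N2, v_N2 = 8.9e-3, 34.7e-6
print(kelvin_radius(0.70, gamma_N2, v_N2, 77.4) * 1e9)                  # ~2.7 nm
print(kelvin_radius(0.70, gamma_N2, v_N2, 77.4, meniscus="cyl") * 1e9)  # ~1.3 nm
```

These length scales are of the same order as the inscribed void radius between three rods for the sample discussed next, which is all this back-of-the-envelope check is meant to show; the quantitative comparison relies on the full model above.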
The predictions of the model were then compared with experimental data from a CMK-3 type sample. Unfortunately, the experimental adsorption isotherms did not unambiguously show two step-like features related to the two transitions. Yet, the shape of the isotherms clearly reveals two "discontinuities" at relative pressures consistent with predictions from the model (Figs. 5 and 6). This agrees with earlier work from other authors on CMK-3 (Gor et al. 2012; Jun et al. 2000; Ryoo et al. 2001), where a second "shoulder" was clearly observed, although this feature was not discussed in these papers. Gor et al. (2012) used N2 isotherms on CMK-3 to derive a pore size distribution from QSDFT, and observed a second class of mesopore sizes which would be consistent with a second condensation event. However, the experimentally observed step height of the transitions does not agree with the theoretical predictions. The experimentally observed second step of the "bridged-to-filled" transition (Fig. 5b) is much smaller than the one predicted (Fig. 4b). There are two possible reasons for this deviation: First, the mean-field theory used here predicts a more abrupt transition than a theory which would consider the density variation in the condensed phase, such as based on molecular simulations. Second, we may attribute this deviation also to the disorder in the system (see Fig. 7a). There is indeed strong evidence in literature that the nanorods exhibit a strongly corrugated surface with carbon cross-bridges between them (Solovyov et al. 2002). As sketched in Fig. 7b, such cross bridges might be local condensation points, and the geometry of the "bridged" phase might be realized only locally over a restricted volume (Fig. 7c). Due to the disorder of the carbon nanowires, specific locations in the sample may exhibit small interstitial spaces in-between three 2D-hexagonally ordered carbon nanowires, which fill already at lower pressures, thus the "separated-to-bridged" transition and the "bridged-to-filled" transition not being separately resolved, as sketched in Fig. 7d, I and II. This would naturally lead to larger filling fractions for the "separated-to-bridged" transition, with a broad transition-pressure regime, as observed experimentally. As a consequence, only a considerably smaller pore volume fraction than predicted by our model (Fig. 4c) would contribute to the "bridged-to-filled" transition. We note that the structural parameters (\(D\) and \(r\)) from SAXS (see Fig. 8) are somewhat ambiguous, as they are related to the highly ordered part of the pore space. It has been shown already for SBA-15 silica (Jähnert et al. 2009; Findenegg et al. 2010) that there may be a considerable amount of "disordered porosity", which we expect to be even higher for CMK-3 due to the additional synthesis step using SBA-15 as a template. We speculate that as long as the distance of mutual contact points between neighboring cylinders along the cylinder axis is clearly larger than the distance between the cylindrical rods, the geometry sketched in Fig. 1 would exist at least locally (see Fig. 7d, I), enabling in principle the proposed transitions. We are however fully aware of the fact that our experimental model system is much more complicated than the theoretical model. Consequently, the observed agreement between the calculated and the measured phase transition pressures is a strong indication, but no final proof for the existence of such capillary bridges between nanorods in CMK-3. 
A final proof would require the availability of samples with much higher structural order, which to our knowledge are not available so far. Sketch of a "realistic" 3D model of the carbon nanorods (a), their mean distance and radius corresponding to the values obtained from SAXS. b and c show sketches of vertical cuts for pressures below (b), and above (c) the "separated-to-bridged" transition, with the liquid adsorbate shown in opaque green. The liquid-like film covering the nanorods is omitted in c for better visualization, and only condensed regions in small "constrictions" are shown, which will act as nucleation sites for the "bridged" phase. In d, two top-view sketches at cuts through the positions I and II in c are shown, demonstrating the local existence of the "bridged" and the "filled" phases in different regions along the rod axis Although kernels of isotherms from molecular simulations provide satisfactory fits to experimental data from CMK-3 carbons, none of them discusses the existence of liquid bridges spanning the shortest distance between neighboring nanorods (Yelpo et al. 2017; Barrera et al. 2013; Jain et al. 2017). The adsorption isotherms in these models are usually derived assuming the existence of a spinodal, which is not necessarily the case in highly conjugated pore spaces (Gommes and Roberts 2018). Our purely thermodynamic equilibrium approach was able to quite accurately predict the experimentally observed transition pressures. This implies that nucleation events in the void space between the nanorods must help overcoming the activation barrier right at the pressure of equilibrium, which consequently means that a spinodal transition is not present. Jain et al. (2017) performed simulations of argon adsorption on CMK-3 including interconnections between the individual nanorods. They could clearly show the influence of such irregularities on the general shape of the adsorption isotherm. The deliberately introduced interconnections basically shifted the relative pressure for condensation to significantly lower values, meaning that smaller structures within the void space could very well serve as nucleation sites. Hence, the presence of disorder and possible carbon interconnects in CMK-3 might be even the key to the formation of liquid bridges between neighboring rods by providing the nucleation sites for the "bridged" phase. The fact that no spinodal transition is needed to determine the relative pressures of condensation sheds light on the underlying physical processes and their relation to the actual structures present in the material. In conclusion, we presented a comparison between computational and experimental results on the structural characterization of a monolithic CMK-3-like material using nitrogen and n-pentane adsorption and small angle X-ray scattering. Following an earlier theoretical approach (Dobbs and Yeomans 1993), three different phases of the fluid in the pore space ("separated", "bridged" and "filled" phases) are proposed, and their grand potentials are minimized by the geometric arrangement of liquid inside the open pore space in the monolithic CMK-3-like material. The theoretical predictions for the adsorption isotherms of nitrogen and n-pentane on the carbon material for varying nanorod radii but constant nanorod distances were used to construct phase diagrams, linking the "separated-to-bridged" and "bridged-to-filled" phase transitions to experimental adsorption data and structural data from small angle X-ray scattering. 
For both, nitrogen and n-pentane adsorption, fair agreement between the theoretical predictions and experimental results is found, indicating that this model is able to qualitatively describe the physical processes governing adsorption in the open pore space of CMK-3 like materials. The resulting mean pore size is in good agreement with earlier work using state-of-the-art methods (Gor et al. 2012). Balzer, C.: Adsorption-induced deformation of nanoporous materials—in-situ dilatometry and modeling. PhD thesis, Universität Würzburg (2018) Balzer, C., Wildhage, T., Braxmeier, S., Reichenauer, G., Olivier, J.P.: Deformation of porous carbons upon adsorption. Langmuir 27(6), 2553–2560 (2011). https://doi.org/10.1021/la104469u Barrera, D., Dávila, M., Cornette, V., de Oliveira, A.J.C., López, R.H., Sapag, K.: Pore size distribution of ordered nanostructured carbon CMK-3 by means of experimental techniques and Monte Carlo simulations. Microporous Mesoporous Mater 180, 71–78 (2013). https://doi.org/10.1016/j.micromeso.2013.06.028 de Boer, J.H., Lippens, B.C., Linsen, B.G., Broekhoff, J.C.P., van den Heuvel, A., Osinga, ThJ: The t-curve of multimolecular N2-adsorption. J. Colloid Interface Sci. 21(4), 405–414 (1966). https://doi.org/10.1016/0095-8522(66)90006-7 Brandhuber, D., Torma, V., Raab, C., Peterlik, H., Kulak, A., Hüsing, N.: Glycol-modified silanes in the synthesis of mesoscopically organized silica monoliths with hierarchical porosity. Chem. Mater. 17(16), 4262–4271 (2005). https://doi.org/10.1021/cm048483j Broekhoff, J.: Studies on pore systems in catalysts IX. Calculation of pore distributions from the adsorption branch of nitrogen sorption isotherms in the case of open cylindrical pores A. Fundamental equations. J. Catal. 9(1), 8–14 (1967). https://doi.org/10.1016/0021-9517(67)90174-1 Cohan, L.H.: Sorption hysteresis and the vapor pressure of concave surfaces. J. Am. Chem. Soc. 60(2), 433–435 (1938). https://doi.org/10.1021/ja01269a058 Croucher, M.D., Hair, M.L.: Hamaker constants and the principle of corresponding states. J. Chem. Phys. 81(17), 1631–1636 (1977). https://doi.org/10.1021/j100532a006 Derjaguin, B.: A theory of capillary condensation in the pores of sorbents and of other capillary phenomena taking into account the disjoining action of polymolecular liquid films. Prog. Surf. Sci. 40(1–4), 46–61 (1992). https://doi.org/10.1016/0079-6816(92)90032-D Dobbs, H.T., Yeomans, J.M.: Capillary condensation within an array of cylinders. Mol. Phys. 80(4), 877–884 (1993). https://doi.org/10.1080/00268979300102731 Findenegg, G.H., Jähnert, S., Müter, D., Paris, O.: Analysis of pore structure and gas adsorption in periodic mesoporous solids by in situ small-angle X-ray scattering. Coll. Surf. A 357(1–3), 3–10 (2010). https://doi.org/10.1016/j.colsurfa.2009.09.053 Gatica, S.M., Calbi, M.M., Cole, M.W.: Simple model of capillary condensation in porous media. Phys. Rev. E 65, 61605 (2002). https://doi.org/10.1103/PhysRevE.65.061605 Gommes, C.J., Roberts, A.P.: Stochastic analysis of capillary condensation in disordered mesopores. Phys. Chem. Chem. Phys. 20, 13646–13659 (2018). https://doi.org/10.1039/C8CP01628C Gor, G.Y., Neimark, A.V.: Adsorption-induced deformation of mesoporous solids. Langmuir 26, 13021–13027 (2010). https://doi.org/10.1021/la1019247 Gor, G.Y., Neimark, A.V.: Adsorption-induced deformation of mesoporous solids: macroscopic approach and density functional theory. Langmuir 27, 6926–6931 (2011). 
https://doi.org/10.1021/la201271p Gor, G.Y., Paris, O., Prass, J., Russo, P.A., Ribeiro Carrott, M.M., Neimark, A.V.: Adsorption of n-pentane on mesoporous silica and adsorbent deformation. Langmuir 29, 8601–8608 (2013). https://doi.org/10.1021/la401513n Gor, G.Y., Thommes, M., Cychosz, K.A., Neimark, A.V.: Quenched solid density functional theory method for characterization of mesoporous carbons by nitrogen adsorption. Carbon 50, 583–1590 (2012). https://doi.org/10.1016/j.carbon.2011.11.037 Halsey, G.: Physical adsorption on non-uniform surfaces. J. Am. Chem. Soc. 16, 931–937 (1948). https://doi.org/10.1063/1.1746689 Haul, R., Gregg, S. J., Sing, K.S.: Adsorption, surface area and porosity, 2nd edn. Academic Press, London (1982) Hill, T.L.: Theory of physical adsorption. In: Frankenburg, W.G., Komarewsky, V.I., Rideal, E.K. (eds.) Advances in catalysis, vol. 4, pp. 211–258. Elsevier, Amsterdam (1952) Hofmann, T., Wallacher, D., Perlich, J., Sarathlal, K.V., Huber, P.: Formation of periodically arranged nanobubbles in mesopores: capillary bridge formation and cavitation during sorption and solidification in an hierarchical porous SBA-15 matrix. Langmuir 32, 2928–2936 (2016). https://doi.org/10.1021/acs.langmuir.5b04560 Jähnert, S., Müter, D., Prass, J., Zickler, G.A., Paris, O., Findenegg, G.H.: Pore structure and fluid sorption in ordered mesoporous silica. I. Experimental study by in situ small-angle X-ray scattering. J. Phys. Chem. C 113, 15201–15210 (2009). https://doi.org/10.1021/jp8100392 Jain, S.K., Pellenq, R.J.-M., Gubbins, K.E., Peng, X.: Molecular modeling and adsorption properties of ordered silica-templated CMK mesoporous carbons. Langmuir 33, 2109–2121 (2017). https://doi.org/10.1021/acs.langmuir.6b04169 Joo, S.H., Ryoo, R., Kruk, M., Jaroniec, M.: Evidence for general nature of pore interconnectivity in 2-dimensional hexagonal mesoporous silicas prepared using block copolymer templates. J. Phys. Chem. B 106, 4640–4646 (2002). https://doi.org/10.1021/jp013583n Jun, S., Joo, S.H., Ryoo, R., Kruk, M., Jaroniec, M., Liu, Z., et al.: Synthesis of new, nanoporous carbon with hexagonally ordered mesostructure. J. Am. Chem. Soc. 122, 10712–10713 (2000). https://doi.org/10.1021/ja002261e Kiusalaas, J.: Numerical methods in engineering with Python 3. University Press, Cambridge (2013) Koczwara, C., Rumswinkel, S., Prehal, C., Jäckel, N., Elsässer, M.S., Amenitsch, H., et al.: In situ measurement of electrosorption-induced deformation reveals the importance of micropores in hierarchical carbons. ACS Appl. Mater. Interface 9, 23319–23324 (2017). https://doi.org/10.1021/acsami.7b07058 Kresge, C.T., Leonowicz, M.E., Roth, W.J., Vartuli, J.C., Beck, J.S.: Ordered mesoporous molecular sieves synthesized by a liquid-crystal template mechanism. Nature 359, 710–712 (1992). https://doi.org/10.1038/359710a0 Kruk, M., Jaroniec, M., Sayari, A.: Application of large pore MCM-41 molecular sieves to improve pore size analysis using nitrogen adsorption measurements. Langmuir 13, 6267–6273 (1997). https://doi.org/10.1021/la970776m Lépinay, M., Broussous, L., Licitra, C., Bertin, F., Rouessac, V., Ayral, A., Coasne, B.: Predicting adsorption on bare and modified silica surfaces. J. Phys. Chem. C 119, 6009–6017 (2015). https://doi.org/10.1021/jp511726a Morishige, K., Nakahara, R.: Capillary condensation in the void space between carbon nanorods. J. Phys. Chem. C 112, 11881–11886 (2008). 
https://doi.org/10.1021/jp8027403 Neimark, A.V., Ravikovitch, P.I.: Capillary condensation in MMS and pore structure characterization. Microporous Mesoporous Mater. 44–45, 697–707 (2001). https://doi.org/10.1016/S1387-1811(01)00251-7 Neimark, A.V., Ravikovitch, P.I., Vishnyakov, A.: Bridging scales from molecular simulations to classical thermodynamics: density functional theory of capillary condensation in nanopores. Microporous Mesoporous Mater. 15, 347–365 (2003). https://doi.org/10.1088/0953-8984/15/3/303 Osborn, WR., Yeomans, JM.: Wetting on lines and lattices of cylinders. Phys. Rev. E 51, 2053–2058 (1995). https://doi.org/10.1103/PhysRevE.51.2053 Philip, J.R.: Adsorption and geometry: the boundary layer approximation. J. Chem. Phys. 67, 1732–1741 (1977a). https://doi.org/10.1063/1.435056 Philip, J.R.: Unitary approach to capillary condensation and adsorption. J. Chem. Phys. 66, 5069–5075 (1977b). https://doi.org/10.1063/1.433814 Putz, F., Morak, R., Elsaesser, M.S., Balzer, C., Braxmeier, S., Bernardi, J., et al.: Setting directions: anisotropy in hierarchically organized porous silica. Chem. Mater. 29, 7969–7975 (2017). https://doi.org/10.1021/acs.chemmater.7b03032 Ravikovitch, P.I., Haller, G.L., Neimark, A.V.: Density functional theory model for calculating pore size distributions: pore structure of nanoporous catalysts. Adv. Colloid Interace. Sci. 76–77, 203–226 (1998). https://doi.org/10.1016/S0001-8686(98)00047-5 Ravikovitch, P.I., Neimark, A.V.: Calculations of pore size distributions in nanoporous materials from adsorption and desorption isotherms. In: Nanoporous Materials II, Proceedings of the 2nd Conference on Access in Nanoporous Materials; Studies in Surface Science and Catalysis, vol. 129, pp. 597–606. Elsevier, Amsterdam (2000) Restagno, F., Bocquet, L., Crassous, J., Charlaix, E.: Slow kinetics of capillary condensation in confined geometry: experiment and theory. Coll. Surf. A 206, 69–77 (2002). https://doi.org/10.1016/S0927-7757(02)00073-0 Ryoo, R., Joo, S. H., Jun, S., Tsubakiyama, T., Terasaki, O.: Ordered mesoporous carbon molecular, sieves by templated synthesis: the structural varieties. In: Zeolites and Mesoporous Materials at the dawn of the 21st century, Proceedings of the 13th International Zeolite Conference, Studies in Surface Science and Catalysis, vol. 135, p. 150. Elsevier, Amsterdam (2001) Saam, W.F., Cole, M.W.: Excitations and thermodynamics for liquid-helium films. Phys. Rev. B 11, 1086–1105 (1975). https://doi.org/10.1103/PhysRevB.11.1086 Silvestre-Albero, A., Silvestre-Albero, J., Martínez-Escandell, M., Futamura, R., Itoh, T., Kaneko, K., Rodríguez-Reinoso, F.: Non-porous reference carbon for N2 (77.4 K) and Ar (87.3 K) adsorption. Carbon 66, 699–704 (2014). https://doi.org/10.1016/j.carbon.2013.09.068 Solovyov, L.A., Shmakov, A.N., Zaikovskii, V.I., Joo, S.H., Ryoo, R.: Detailed structure of the hexagonally packed mesostructured carbon material CMK-3. Carbon 40, 2477–2481 (2002). https://doi.org/10.1016/S0008-6223(02)00160-4 Yelpo, V., Cornette, V., Toso, J.P., López, R.H.: Characterization of nanostructured carbon CMK-3 by means of Monte Carlo simulations. Carbon 121, 106–113 (2017). https://doi.org/10.1016/j.carbon.2017.05.085 Zhao, D., Huo, Q., Feng, J., Chmelka, B.F., Stucky, G.D.: Nonionic Triblock and Star Diblock copolymer and oligomeric surfactant syntheses of highly ordered, hydrothermally stable, mesoporous silica structures. J. Am. Chem. Soc. 120, 6024–6036 (1998). 
https://doi.org/10.1021/ja974025i Zickler, G.A., Jähnert, S., Wagermaier, W., Funari, S.S., Findenegg, G.H., Paris, O.: Physisorbed films in periodic mesoporous silica studied by in situ synchrotron small-angle diffraction. Phys. Rev. B 73, 17 (2006). https://doi.org/10.1103/PhysRevB.73.184109 Open access funding provided by Austrian Science Fund (FWF). LL acknowledges a scholarship from the Marshall Plan Foundation for a three month stay at NJIT. We acknowledge financial support from the Austrian Science Foundation FWF (award No. I 1605-N20) and the German Science Foundation DFG (award No. RE1148/10-1) in the framework of the DACH agreement. Institute of Physics, Montanuniversität Leoben, Franz-Josef Strasse 18, 8700, Leoben, Austria Lukas Ludescher & Oskar Paris Otto H. York Department of Chemical and Materials Engineering, New Jersey Institute of Technology, University Heights, Newark, NJ, 07102, USA Lukas Ludescher & Gennady Y. Gor Bavarian Center for Applied Energy Research, Magdalene-Schoch-Str. 3, 97074, Wuerzburg, Germany Stephan Braxmeier, Christian Balzer & Gudrun Reichenauer Chemistry and Physics of Materials, Paris Lodron University Salzburg, Jakob-Haringer Str. 2a, 5020, Salzburg, Austria Florian Putz & Nicola Hüsing Lukas Ludescher Stephan Braxmeier Christian Balzer Gudrun Reichenauer Florian Putz Nicola Hüsing Gennady Y. Gor Oskar Paris All authors contributed to the study conception and design. Material synthesis and SEM measurements were performed by FP and NH. Adsorption isotherms of samples and reference isotherms were measured by CB, SB and GR. Small angle x-ray scattering measurements were performed by LL. Theoretical calculations and numerical implementation was performed by LL, advised by GG and OP. The manuscript was written by LL and OP. All authors read and approved the final manuscript. Correspondence to Gennady Y. Gor or Oskar Paris. Appendix A: Computational details In Sect. 2, the Euler–Lagrange equations of the grand potential were developed, which yielded two second order, non-linear ordinary differential equations (ODE), Eqs. 5 and 7. Solving these ODEs with the appropriate boundary conditions (Eq. 9 for the separated phase and Eq. 10 for the "bridged" phase), results in profiles satisfying the Euler–Lagrange equation for the grand potential. Because the boundary conditions force the according solutions, they do not necessarily represent global, but at least local minima of the grand potential. Therefore, it is important to start with a physical reasonable "educated guess" for the resulting profile, which is rather simple for the separated phase, but more ambiguous for the "bridged" phase. A custom written Python code (version 3.6) employing the Anaconda package (version 3.6.5) was used to numerically solve the ODEs. The two methods used to solve boundary value problems are the shooting and finite difference method (Kiusalaas 2013). Because the finite-difference method provides fast convergence and stability of results compared to the shooting method in two-point boundary value problems (Kiusalaas 2013), the former was applied in this paper. The main idea of this approach is to minimize a residual vector \(R\) which is a numerical function of a discretized profile vector \(L\). To construct a residual and profile vector \(R\) and \(L\), the differential equation needs to be discretized by constructing a mesh of \(n\) nodes with a step width \(\Delta \theta \), where the differentials in Eqs. 5 and 7 were defined by the central difference scheme. 
At the \(i\)th node for \(0<i<n\) of the mesh, the first derivative (\({l}_{\theta ,i}=({l}_{i+1}-{l}_{i-1})/2\Delta \theta \)) and the second derivative (\({l}_{\theta \theta }=({l}_{i-1}-2 {l}_{i}+{l}_{i+1})/\Delta {\theta }^{2}\)) are uniquely determined. At the boundaries \(i=0\) and \(i=n\), the boundary conditions are clearly defined by Eqs. 9 and 10. As the first derivative needs to be zero at both boundaries, the nodal values must be identical to their neighbors (hence \({l}_{-1}={l}_{1}={l}_{0}\) and \({l}_{n-1}={l}_{n+1}={l}_{n}\) if the solution is symmetric to the left and right of the boundaries). With this approach, the differential equation is only dependent on the profile vector \(L\) (and therefore on the discrete values \({l}_{i}\)) and \(\Delta \theta \). To numerically solve Eqs. 5 and 7, they need to be rewritten in the following form: $$ l_{\theta \theta } = F\left( {\theta ,l,l_{\theta } } \right), $$ where \(F\) and \({l}_{\theta \theta }\) can be rewritten in a discretized form with the definitions of \({l}_{\theta }\) and \({l}_{\theta \theta }\) provided above. With these definitions the residual vector \(R\) of the differential equations can be defined as a function of the profile vector's components \({l}_{i}\) and the step width of the mesh \(\Delta \theta \): $$ R_{i} = l_{i - 1} - 2 l_{i} + l_{i + 1} - \Delta \theta^{2} F\left( {\theta_{i = 0} + i \Delta \theta ,l_{i} ,\frac{{l_{i + 1} - l_{i - 1} }}{{2 {\Delta }\theta }}} \right) $$ Supplying an initial solution to Eq. 18 results in numerical values of the residual vector's entries, which deviate from 0. Hence, if the entries of the residual vector \(R\) are close or equal to 0, the differential equation defined by \(F\) is solved by the profile vector \(L\). A minimization algorithm, such as a least-squares algorithm, minimizes the residual by adjusting the supplied initial solution vector. If the differential equation being solved is linear, a single minimization operation is sufficient. In the case of non-linear differential equations, iterations have to be performed to find a solution. The 'least_squares' function of the Python module 'SciPy' only needs a functional definition of the residual (Eq. 18) and an initial solution to solve a non-linear differential equation. For an educated guess of the initial solution for the "bridged" phase, we employed a cosine function with a linear transform of the input variable θ to ensure that the derivative of the initial solution satisfies the boundary conditions (Eqs. 9 and 10), effectively compressing the cosine in an interval from 0 to π into the respective bounds. The values at \(l(\theta =0)\) and \(l(\theta =\pi /3)\) were chosen as follows: It can be assumed that the film thickness \(h\) of the "bridged" solution at the largest angle π/3 is close to the thickness in the case of adsorption on a single cylinder. This value for \(h(p/{p}_{0},\theta =\pi /3)\) can be easily obtained by solving Derjaguin's equation for adsorption on a single cylinder. For \(l(\theta =0)\), we assumed the bridge to have a thickness of twice the value of the film thickness \(h\) at the angle of π/3. For low relative pressures and consequently low fillings of the void space, the input argument of the compressed cosine was distorted with a power law \(\propto \mathrm{cos}({x}^{z})\), which results in a distorted shape of the cosine. For the "separated" phase, convergence is easily achieved by choosing a flat profile within physically reasonable bounds.
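To make this procedure more tangible, a minimal Python sketch of the finite-difference/least-squares scheme is given below. The right-hand side F used here is only a placeholder (the actual expressions follow from Eqs. 5 and 7, which involve the interaction and geometry parameters of Sect. 2), and the mirror ghost-node treatment is one possible way of imposing the zero-slope boundary conditions stated above.

```python
import numpy as np
from scipy.optimize import least_squares

def solve_profile(F, l_init, theta_max=np.pi / 3, n=201):
    """Solve l'' = F(theta, l, l') with zero-slope boundaries on [0, theta_max].

    F      : callable F(theta, l, l_theta), evaluated element-wise on the mesh
    l_init : initial guess for the profile (length n), e.g. the compressed cosine
    """
    theta = np.linspace(0.0, theta_max, n)
    dtheta = theta[1] - theta[0]

    def residual(l):
        # mirror (ghost) nodes enforce l'(0) = l'(theta_max) = 0
        lp = np.concatenate(([l[1]], l, [l[-2]]))
        l_minus, l_mid, l_plus = lp[:-2], lp[1:-1], lp[2:]
        l_theta = (l_plus - l_minus) / (2.0 * dtheta)
        # discretized residual of Eq. 18
        return l_minus - 2.0 * l_mid + l_plus - dtheta**2 * F(theta, l_mid, l_theta)

    sol = least_squares(residual, l_init)
    return theta, sol.x

# Placeholder right-hand side: l'' = l - 1.2, whose zero-slope solution is the
# flat profile l = 1.2 (mimicking the flat "separated"-phase initial guess).
F_demo = lambda theta, l, l_theta: l - 1.2
theta0 = np.linspace(0.0, np.pi / 3, 201)
l_guess = 1.0 + 0.5 * np.cos(theta0 * 3.0)   # cosine compressed from [0, pi] into [0, pi/3]
theta, profile = solve_profile(F_demo, l_guess)
```

For the actual problem, F_demo would be replaced by the discretized Euler–Lagrange right-hand side, and the initial guess would be built from the Derjaguin film thickness as described in the text.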
To obtain these physically reasonable bounds, it is of great advantage to solve Derjaguin's equation for a single cylinder at the relative pressure under consideration and use the corresponding film thickness \(h(p/{p}_{0})\) as the starting value. Appendix B: Experimental details B1: Synthesis of the carbon sample The sample investigated was a monolithic carbon sample with hierarchical porosity. It consists of a macroporous network of struts (Fig. 8a), with each strut comprising a 2D hexagonal arrangement of carbon nanorods, leaving a mesopore space between the rods that resembles the geometry sketched in Fig. 1. The carbon monolith was synthesized via a nanocasting approach using silica monoliths with hierarchical porosity (Brandhuber et al. 2005; Putz et al. 2017) as templates. The nanocasting procedure consisted of the infiltration of the cylindrical silica mesopores with a carbon precursor, its carbonization at 850 °C and finally the silica template removal by HF etching. As the material was synthesized for applications as supercapacitor electrodes, it was activated with carbon dioxide at 925 °C for 30 min to increase the microporosity within the carbon nanorods. The micropores are located within the carbon nanorods and are responsible for the steep increase of the adsorption isotherms at low pressures (Figs. 5b and 6b). They should, however, not influence the adsorption data from the mesopores at larger relative pressures. A detailed description of the sample synthesis is given in Ref. (Koczwara et al. 2017). B2: Determination of the D/r ratio using SAXS The ordered mesopore structure of the sample was characterized by small-angle X-ray scattering (SAXS) using a laboratory SAXS instrument (Nanostar, Bruker AXS, Karlsruhe). Figure 8b shows the SAXS pattern of the sample on a double-logarithmic scale. Sharp Bragg reflections at positions qhk confirm the 2D hexagonal order of the carbon nanorods, with \({q}_{hk}=\frac{4\pi \mathrm{sin}({\theta }_{hk})}{\lambda }\), θ being half the scattering angle, λ the X-ray wavelength, and hk the Miller indices of the lattice. The diffuse scattering contributions below the Bragg peaks can mainly be attributed to scattering from the disordered micropores in the sample, which are however not relevant for the following. From the peak positions the lattice parameter (corresponding to the rod distance D) can be calculated by \(D=\frac{4\pi }{\sqrt{3}{q}_{hk}}\sqrt{{h}^{2}+{k}^{2}+hk}\), giving a value D = (10.1 ± 0.1) nm for the present sample. a Scanning electron microscopy (SEM) image of the macroporous structure of the monolithic CMK-3 type carbon material. b Small-angle X-ray scattering profile of the material. Four distinct peaks with Miller indices (10), (11), (20), (30) are distinguishable. To estimate the radius of carbon nanorods from the SAXS data, the integrated SAXS intensities of the four observable Bragg reflections were analyzed by a double-shell cylindrical form factor model outlined in detail in Refs. (Zickler et al. 2006; Jähnert et al. 2009). As the sample is essentially the inverse replica of a silica sample with hexagonally arranged cylindrical mesopores, the SAXS model of a cylindrical nanorod with a rough (or microporous) corona can be applied. The fit delivers three parameters, i.e., an inner radius (R1), an outer radius (R0), and a relative density ρc of the corona, from which an "equivalent nanorod radius" r = \(\sqrt{{R}_{1}^{2}+{\rho }_{c}\left({R}_{0}^{2}-{R}_{1}^{2}\right)}\) can be deduced (Jähnert et al. 2009).
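As a quick numerical check of the two relations above, the short sketch below evaluates the lattice-parameter formula and the equivalent nanorod radius. The fit values are those quoted in the following paragraph, and the (10) peak position is simply back-calculated from D = 10.1 nm for illustration.

```python
import numpy as np

def lattice_parameter(q_hk, h, k):
    """Rod distance D from a Bragg peak position q_hk (nm^-1) with Miller indices (h, k)."""
    return 4.0 * np.pi / (np.sqrt(3.0) * q_hk) * np.sqrt(h**2 + k**2 + h * k)

def equivalent_radius(r1, r0, rho_c):
    """Equivalent nanorod radius of the double-shell cylinder model."""
    return np.sqrt(r1**2 + rho_c * (r0**2 - r1**2))

q10 = 4.0 * np.pi / (np.sqrt(3.0) * 10.1)   # (10) peak position implied by D = 10.1 nm
D = lattice_parameter(q10, 1, 0)            # recovers D = 10.1 nm
r = equivalent_radius(1.94, 4.0, 0.9)       # about 3.84 nm, consistent with r = (3.9 ± 0.4) nm
print(D / r)                                # about 2.6, the D/r ratio used in the phase diagrams
```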
With the values \({R}_{1}=1.94\) nm, \({R}_{0}=4\) nm and \({\rho }_{c}=0.9\) obtained from the fit, the equivalent nanorod radius determined for the present sample is r = (3.9 ± 0.4) nm, resulting in a ratio D/r = 2.6 ± 0.05. In the present geometry, the value \({\rho }_{c}=0.9\) is directly associated with the amount of carbon present in the corona, which means that the carbon nanowires are quite dense and well defined. B3: Adsorption isotherms Adsorption isotherms of the hierarchical carbon monolith were obtained for nitrogen at 77 K and n-pentane at 290 K, using a commercial volumetric adsorption instrument (ASAP2020, Micromeritics). Prior to the analysis, the sample was degassed at 300 °C within the sample holder for several hours. The adsorption isotherms are shown in Fig. 9. A slight hysteresis can be observed for both adsorbates in the region where the "bridged" phase is supposed to be stable. In addition, an n-pentane reference isotherm was measured to obtain the interaction parameters for n-pentane with carbon, by using a sample of thermally annealed carbon xerogel. Details on the synthesis and further characterization of this sample can be found in Ref. (Balzer et al. 2011). The measurement was performed with an ASAP2020 (Micromeritics) at 273 K. Prior to the measurement the sample was degassed at 300 °C. Although this sample does not deliver an ideal reference isotherm of a purely non-porous material, the very low microporosity (0.01 cm3/g) and the large radius of the mesopores (Balzer 2018) provide a satisfactory estimate of the interaction parameters. We mention that the theoretical model was parameterized based on these n-pentane reference data measured at 273 K, while it was applied to the adsorption data measured on the CMK-3-type sample at 290 K. We believe that this approximation is acceptable, since the interaction parameters of alkanes show only a weak temperature dependence (Croucher and Hair 1977). In a and b, the full adsorption isotherms of nitrogen and n-pentane, respectively, are shown. Closed symbols denote the adsorption branch, empty symbols the desorption branch of the isotherm. Appendix C: Comparison with the Kelvin-Cohan equation and with a simplified analytical model First, we compare the results reported in Figs. 5 and 6 with a simple analysis using the Kelvin-Cohan equation (Cohan 1938; Neimark et al. 2003). In their first paper on CMK-3 materials, Jun et al. (2000) used this approach to obtain the CMK-3 pore size based on earlier work on MCM-41 silica (Kruk et al. 1997), by interpreting this size as the diameter of the cylinder fitting between three hexagonally arranged rods. $$ - \frac{{R_{g} T}}{{v_{l} }}\ln\left( {\frac{p}{{p_{0} }}} \right) = \frac{{\gamma}}{{r_{u}^{*} - h}}. $$ In their work, the film thickness \(h\) was adjusted by an additive factor of 0.3 nm to correct for inaccuracies in the Harkins–Jura equation (de Boer et al. 1966). Since the interaction between nitrogen/pentane and carbon was determined directly in the present work using reference isotherms, we directly employ the values \(h(p/{p}_{0})\) given in Fig. 2. The maximum inscribed radius \({r}_{u}^{*}\) between three cylindrical rods is calculated by Eq. 15 using \(D\) and \(r\) from SAXS (Appendix B2), which gives \({r}_{u}^{*}=1.93\) nm. Plugging this value into Eq. 19 in conjunction with Eq. 11, the pressure of capillary condensation is \(p/{p}_{0}=0.68\), which is very close to the value of the proposed "bridged-to-filled" transition for nitrogen found in Fig. 5.
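For orientation, the following sketch shows one way the Kelvin-Cohan estimate of Eq. 19 can be evaluated numerically. The nitrogen surface tension and molar liquid volume are common literature values for 77 K and are not taken from this paper, and the constant film thickness is only a crude stand-in for the measured h(p/p0) of Fig. 2, so the number it returns is merely of the right order.

```python
import numpy as np

R_G = 8.314        # J mol^-1 K^-1
GAMMA = 8.85e-3    # N m^-1, surface tension of liquid nitrogen near 77 K (literature value)
V_L = 34.7e-6      # m^3 mol^-1, molar volume of liquid nitrogen (literature value)
T = 77.4           # K

def kelvin_cohan_pressure(r_star, film_thickness, n_iter=50):
    """Relative pressure of condensation from Eq. 19; r_star in metres.

    film_thickness: callable h(p/p0) in metres; the coupling between h and p/p0
    (Eq. 11) is mimicked here by simple fixed-point iteration.
    """
    p_rel = 0.5
    for _ in range(n_iter):
        h = film_thickness(p_rel)
        p_rel = np.exp(-GAMMA * V_L / (R_G * T * (r_star - h)))
    return p_rel

h_const = lambda p_rel: 0.6e-9                   # assumed constant 0.6 nm film, placeholder only
print(kelvin_cohan_pressure(1.93e-9, h_const))   # about 0.7, of the order of the 0.68 quoted above
```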
For n-pentane, the pressure of capillary condensation predicted by Eq. (19) is \(p/{p}_{0}=0.57\), which is again close to the value predicted theoretically and found experimentally in Fig. 6. The gap between two neighboring carbon nanowires can be approximated by adsorption between two flat carbon surfaces with distance \(2{r}_{gap}=D-2r\). Again, the classical Kelvin-Cohan equation can deliver approximate results (Restagno et al. 2002) for relative pressures of capillary condensation if the \(D/r\) ratio is close to 2. The pressure of bridge formation is thus calculated to be \(p/{p}_{0}=0.5\) for N2 and \(p/{p}_{0}=0.3\) for n-pentane. These values are somewhat larger than the ones in Figs. 5 and 6. We note that for the actual carbon, \(D/r=2.6\), so the treatment as two flat carbon surfaces is only a very rough approximation. Yet, this simple analysis shows that the classical Kelvin-Cohan equation (Eq. 19) gives values comparable to the results from Figs. 5 and 6 for adsorption in the small gap between neighboring carbon nanowires, equivalent to the "separated-to-bridged" transition, and good agreement for capillary condensation in the interstitial space between three hexagonally arranged nanowires, corresponding to the "bridged-to-filled" transition. To shed more light on the model of adsorption introduced in Sect. 2, a simplified model allowing an analytical solution was adapted (Osborn and Yeomans 1995), which should still capture the essential physics. Sketch of the simplified "bridged" phase model. The profile is shown as a blue closed curve. The two radii \(a\) and \({r}_{c}=r+h\) (red dashed circles), and the angle \(\theta \) at which they meet, define the profile unambiguously (Color figure online) In this model, summarized in Fig. 10, the "bridged" profile is defined by two radii of curvature, one being \({r}_{c}=r+h\), and the other being \(a={\gamma}/\Delta \tilde{\mu}\), the two being related by \(\cos\left(\theta \right)=\frac{D}{2\left({r}_{c}+a\right)}\). With \(h\) calculated from Eq. 11 and \(\Delta \tilde{\mu }\) from Eq. 3, the profile is uniquely determined by the relative pressure \(p/{p}_{0}\). The grand potential of this approximated "bridged" profile is explicitly given by (Osborn and Yeomans 1995): $$ {\Omega }\left( {\frac{p}{{p_{0} }}} \right) = 6\gamma \left( {\left( {r + h} \right)\left( {\frac{\pi }{6} - \theta } \right) + \left( {\frac{\pi }{2} - \theta } \right)a} \right) + 6 {\Delta }\tilde{\mu }\left( {\frac{{\left( {2rh + h^{2} } \right)}}{2}\left( {\frac{\pi }{6} - \theta } \right) + \frac{{D^{2} }}{8}\tan \left( \theta \right) - \frac{{a^{2} }}{2}\left( {\frac{\pi }{2} - \theta } \right) - \frac{{r^{2} }}{2}\theta } \right), $$ where the first part describes the contribution from the liquid–vapor interface and the second part takes into account the change in the potential due to the adsorbed liquid. For the separated phase, the grand potential reads as: $$ {\Omega }\left( {\frac{p}{{p_{0} }}} \right) = \gamma \left( {r + h} \right)\pi + 6{\Delta}{\mu} \left( {\left( {r + h} \right)^{2} - r^{2} } \right)\frac{\pi }{4} . $$ We note that in this model, the contribution of the disjoining pressure term taking the solid–liquid interaction into account is omitted (compare with Eq. 2). As a consequence, only vapors with a very weak, short-range interaction with the substrate (ideally \(m\sim 3\) in Eq. 11) can be satisfactorily modelled. Consequently, only nitrogen adsorption (\(m\sim 2.5\)) is considered in the following.
A phase diagram similar to Fig. 5a was constructed for nitrogen using Eqs. 20 and 21 (Fig. 11). The overall shape of the phase diagram is close to Fig. 5a, with some deviations due to the fixed geometry of the bridge profile especially at higher relative pressures. In Fig. 11 we also included the predictions for capillary condensation using Eq. 19 in the gap between neighboring nanowires as a black dashed line and in the interstitial between three carbon nanowires as a blue dashed line. The results align well for the "bridged-to-filled" transition, yet they deviate significantly for the "separated-to-bridged" transition with increasing \(D/r-\) ratio as expected. Calculated phase diagram for nitrogen in CMK-3-like carbon at 77.4 K showing the "separated" phase (green), the "bridged" phase (white), and the "filled" phase (red) using the simplified model (Eqs. 20, 21 and 8). The dashed lines correspond to pressures of capillary condensation for the "separated-to-bridged" (black) and "bridged-to-filled" (blue) transitions using the Kelvin-Cohan Equation (Eq. 19) (Color figure online) Ludescher, L., Braxmeier, S., Balzer, C. et al. Capillary bridge formation between hexagonally ordered carbon nanorods. Adsorption 26, 563–578 (2020). https://doi.org/10.1007/s10450-020-00215-6 Revised: 24 February 2020 Issue Date: May 2020 Capillary bridges Adsorption isotherm Ordered mesoporous carbon CMK-3
What is the motivation for using Calabi-Yau manifolds in string theory? I have just begun to study Calabi-Yau compactification. Looking in many books I found that, if we start with a critical superstring theory in $D=10$, we are in search of a compact $D=6$ Calabi-Yau manifold, i.e. a manifold with a spinor that is parallel transported. We do this because we want to preserve some supersymmetry. The thing that I did not understand is why we search for only this kind of manifold (which, I read, exists in every even dimension) for the compactified, six-dimensional part of the spacetime. Why do we not care about the supersymmetry of the remaining four-dimensional, "physical", non-compact spacetime? string-theory supersymmetry compactification calabi-yau MaPo Possible duplicates: physics.stackexchange.com/q/4972/2451 , physics.stackexchange.com/q/13945/2451 , physics.stackexchange.com/q/24540/2451 , physics.stackexchange.com/q/179563/2451 , physics.stackexchange.com/q/10495/2451 and links therein. That post is a bit too generic. I know why we need CY compactification. What I do not understand is why, if we want to preserve putative phenomenological SUSYs, we do not care about the phenomenologically relevant part of the spacetime? – MaPo The reason is clearly given in the famous paper "Vacuum configurations for superstrings" (http://www.sciencedirect.com/science/article/pii/0550321385906029). Here, I am just copying the introduction of that paper; I cannot put the reason in better words. Recently, the discovery [6] of anomaly cancellation in a modified version of d = 10 supergravity and superstring theory with gauge group $O(32)$ or $E_8 \times E_8$ has opened the possibility that these theories might be phenomenologically realistic as well as mathematically consistent. A new string theory with $E_8 \times E_8$ gauge group has recently been constructed [7] along with a second $O(32)$ theory. For these theories to be realistic, it is necessary that the vacuum state be of the form $M_4 \times K$, where $M_4$ is four-dimensional Minkowski space and K is some compact six-dimensional manifold. (Indeed, Kaluza-Klein theory - with its now widely accepted interpretation that all dimensions are on the same logical footing - was first proposed [8] in an effort to make sense out of higher-dimensional string theories). Quantum numbers of quarks and leptons are then determined by topological invariants of K and of an $O(32)$ or $E_8 \times E_8$ gauge field defined on K [9]. Such considerations, however, are far from uniquely determining K. In this paper, we will discuss some considerations, which, if valid, come very close to determining K uniquely. We require (i) the geometry to be of the form $H_4 \times K$, where $H_4$ is a maximally symmetric spacetime. (ii) There should be an unbroken N = 1 supersymmetry in four dimensions. General arguments [10] and explicit demonstrations [11] have shown that supersymmetry may play an essential role in resolving the gauge hierarchy or Dirac large numbers problem. These arguments require that supersymmetry is unbroken at the Planck (or compactification) scale. (iii) The gauge group and fermion spectrum should be realistic. These requirements turn out to be extremely restrictive. In previous ten-dimensional supergravity theories, supersymmetric configurations have never given rise to chiral fermions - let alone to a realistic spectrum.
However, the modification introduced by Green and Schwarz to produce an anomaly-free field theory also makes it possible to satisfy these requirements. We will see that unbroken N = 1 supersymmetry requires that K have, for perturbatively accessible configurations, $SU(3)$ holonomy* and that the four-dimensional cosmological constant vanish. The existence of spaces with $SU(3)$ holonomy was conjectured by Calabi [12] and proved by Yau [13]. John Doe The root of it is the so-called anomaly, which makes superstring theories mathematically inconsistent. The only dimension where it may vanish is the "critical" one (10), but that in itself is not enough if the global topology is non-trivial. And it has to be non-trivial to compactify the extra dimensions. Since the physical spacetime is topologically trivial, the issue reduces to the compactified part only, and implies that its first Chern class must vanish, making it Calabi-Yau. Conifold
Why is there an emphasis on tensor equations in GR? In my understanding the purpose of using tensor equations in GR is to ensure that they are true in all coordinate systems. I understand that writing equations tensorially ensures this will be the case; however, are there not non-tensor equations that would also be true in all coordinate systems? For example, one can define a tensor by its components and how they transform from one coordinate system to another (the tensor transformation law). It seems to me that you could define some other quantity that transforms according to another transformation law, and that equations written in this quantity would also be valid in all coordinate systems. I've also seen tensors defined as geometric objects on the manifold that act as linear forms on the tangent and cotangent spaces on the manifold. This geometric definition immediately guarantees coordinate independence. Again, I don't see why we can't define a more general geometric object (i.e., not a tensor) and make that the basis of our coordinate independent equations. To summarise, why is there an emphasis on tensor equations in GR when it seems to me that there should be plenty of non-tensor equations that are valid in all coordinate systems as well? EDIT: As an example, consider some arbitrary mapping from the tangent space to the reals that is not linear in the tangent vectors. This is a coordinate independent definition. The only difference between these objects and tensors is that for tensors, the mapping is linear. I suppose non-linearity means that these objects won't have straightforward, easily interpretable 'components' in each coordinate system, but I don't see why we still couldn't make important statements about the geometry of spacetime using them. general-relativity differential-geometry tensor-calculus covariance Andrew What do you want to do with these "other quantities"? – Ihle Jul 31 '17 at 6:47 I think, rather than hypothesising these objects, you should try to give examples of them. – tfb Jul 31 '17 at 6:50 I think the question is not just limited to GR, but to any field that uses tensors, e.g. fluid mechanics. – Deep Jul 31 '17 at 7:34 Is the nonlinear function a single function that is set once and for all, applying to every object within a certain class of objects? Or is the nonlinear function different for every function in a particular class? – Ben Crowell Jul 31 '17 at 15:06 There are coordinate-independent objects that aren't tensors: connections, densities, spinors, sections of fiber bundles in general, etc. Tensors, however, are related to manifold geometry (contrast this with sections of an arbitrary vector bundle) and have a linear dependence on directions. I'm going to illustrate this with the same example Wald uses in his GR book. Imagine a magnetic field $B$ permeating space. You have a detector that measures the magnetic field in the direction the detector probe is pointing. How do you measure the magnetic field at point $x$? You choose and record three linearly independent probe orientations. Since the probe probably uses the same units in all directions, and has the same sensitivity, all three orientations can be taken to be unit vectors. Let the three directions be $e_1,e_2$ and $e_3$. You take the three measurements; these return the values $$ B_1=B(e_1),\ B_2=B(e_2),\ B_3=B(e_3). $$
As you can see, the magnetic field plays the role of a covector here, instead of a vector. From this, you can assemble the magnetic field as $$ B=B_1 e^1+B_2 e^2+B_3e^3, $$ where $e^i$ is the dual basis element to $e_i$. The metric tensor is $$ g_{ij}=\left(\begin{matrix} 1 & \cos\alpha_{12} & \cos\alpha_{13} \\ \cos{\alpha_{12}} & 1 & \cos{\alpha_{23}} \\ \cos{\alpha_{13}} & \cos{\alpha_{23}} & 1\end{matrix}\right) $$ where $\alpha_{ij}$ is the angle between $e_i$ and $e_j$. The magnetic field vector is given by $\sharp B=g^{1i}B_ie_1+g^{2i}B_ie_2+g^{3i}B_ie_3$, the squared magnitude is given by $||B||^2=g^{ij}B_iB_j$, etc. Now, if instead of having a linear dependence on directions, $B$ was some arbitrary smooth function $B:T_xM\rightarrow\mathbb{R}$, then you'd need an infinite number of measurements (in an infinite number of directions) to reconstruct it at a point. Clearly, these "direction-dependent" quantities in physics behave in such a way that you don't need an infinite number of measurements to measure them at a point. If they did, physics as we know it would not exist! So the reason we use tensors is that physics is measurable. Bence Racskó I am pretty sure that, given linear functions $\left\{\omega_n: T_xM \to \mathbb{R}\right\}$ such that, for $\vec{v}\in T_xM$, $\lim_{n\to\infty}\sum_{n=0}^m \left(\omega_n(\vec{v})\right)^n$ converges (note no ESC here!), then, well, that's a Taylor series, and so you can express quite general nonlinear functions using a set of tensors. But you need an infinite set in general, which is the point you were making. – tfb Jul 31 '17 at 12:01 I'll give you +20 if you can find me a probe that uses different units when oriented in different directions. – Jim Jul 31 '17 at 13:33 In addition to the examples you give like Christoffel symbols and tensor densities, I guess one can simply build inhomogeneous n-tuples, e.g., an object $(f,\omega)$ that is an ordered pair built out of a scalar and a covector. – Ben Crowell Jul 31 '17 at 15:09 Thanks for the answer. One thought: the Einstein tensor is usually motivated by being the only tensor formed of the metric and its first and second derivatives. We seem to be really lucky that the way matter curves spacetime is perfectly encapsulated by a tensor equation! If it were a bit more complicated (non-tensorial), we would seem to have no hope of measuring this relationship easily. (Granted, SR teaches us that energy-momentum is a tensor quantity, so maybe that would motivate the tensorial nature of the curvature measure on the LHS of Einstein's Field Equations) – Andrew Aug 1 '17 at 8:37 Good question, and one that isn't commonly gone into in the physics literature when they introduce the tensor transformation law. Try visualising it geometrically: take a simple example, deforming the surface of a sphere to an ellipsoid; if we take a small (i.e. infinitesimal) patch of the ball and see how it transforms, we see that there is a multilinear dependence; this example can be generalised to arbitrary manifolds in any Cartesian space, and we can also, with some thought, drop the embedding. A multilinear transformation is characterised universally by tensors, and then by taking bases we get the usual coordinate transformation property that characterises tensors common in the physics literature.
The best reference I've seen for this is Lee's book on differential geometry, and Dodson's Tensor Geometry, though he tends to use some idiosyncratic terminology. Tensors by themselves don't ensure that a formula is correct in all coordinate systems. The Navier-Stokes equations, for example, can be written in tensor form but are not coordinate independent. What you need in fact is the covariant derivative property. The tensor form, on the other hand, is needed to describe stresses on a surface. To describe the shear stress on a surface, for example, you need one vector lying in the surface to describe the force on that surface with magnitude and direction. But then you need another vector to describe the position and orientation of the surface itself; hence the two indices of the second-rank tensor. Riad
Universal optical setup for phase-shifting and spatial-carrier digital speckle pattern interferometry Sijin Wu, Mingli Dong, Yao Fang & Lianxiang Yang Digital speckle pattern interferometry (DSPI) is a competitive optical tool for full-field deformation measurement. The two main types of DSPI, phase-shifting DSPI (PS-DSPI) and spatial-carrier DSPI (SC-DSPI), are distinguished by their unique optical setups and methods of phase determination. Each DSPI type has its own limitations in practical applications. We designed a universal optical setup that is suitable for both PS-DSPI and SC-DSPI, with the aim of integrating their respective advantages, including PS-DSPI's precise measurement and SC-DSPI's synchronous measurement, and thus improving DSPI's measuring capacity in engineering. The proposed setup also has several other advantages, including a simple and robust structure, easy adjustment and operation, and versatility of measuring approach. Deformation measurement, especially three-dimensional (3D) deformation measurement, is essential to the quantitative description of object change and the accurate determination of mechanical properties. Traditionally, deformation measurement is carried out by the use of displacement transducers, such as strain gauges [1]. However, displacement transducers suffer from the disadvantage of being a spot measurement technique, which leads to low spatial resolution and insufficient information for full-field deformation measurement. Optical techniques such as digital speckle pattern interferometry (DSPI) [2, 3], digital image correlation [4], and the Moiré method [5] have become preponderant methods in the measurement of deformation for objects with rough surfaces due to their full-field, stand-off, and non-contact measurement nature. Moreover, optical methods, particularly DSPI, are also very precise tools. DSPI is mainly divided into two categories based on the optical setup and the interferometric phase-extraction method: phase-shifting DSPI (PS-DSPI) and spatial-carrier DSPI (SC-DSPI). SC-DSPI is also known as digital holographic interferometry [6, 7]. PS-DSPI utilizes the interference between an object beam from a measuring target and a reference beam from a fixed surface to measure the out-of-plane deformation, and the interference between object and reference beams from the measuring target via different paths to measure in-plane deformations [8, 9]. 3D deformation measurement is then realized by combining one optical setup for out-of-plane deformation measurement and two optical setups for in-plane deformation measurement. The three channels are enabled in turn when performing the 3D measurement, resulting in asynchronous measurement of the 3D deformations. However, synchronous measurement of 3D deformations is desired in practical applications to enable the change and mechanical model of the measuring object to be characterized properly. Therefore, the inability of PS-DSPI to perform synchronous measurement limits its employment in practical engineering. Furthermore, PS-DSPI is usually unsuitable for dynamic measurement due to the amount of time consumed in the process of obtaining the interferometric phase. The dominant phase extraction method in PS-DSPI is the temporal phase shift, which carries out several phase shifts and requires the measuring target to be stationary during the phase shift [10].
Dynamic deformations are not easily measured, even if the time interval between adjacent phase steps is very short. Though other phase extraction methods, such as spatial phase shift [11] and phase of difference phase shift [12], have been used in DSPI to make dynamic deformation measurement possible, these methods are difficult to use, result in a more complicated system structure, and provide less reliable measurement results. Consequently, these fast phase extraction methods are rarely used in commercial PS-DSPI instruments. SC-DSPI also uses a multi-channel optical setup, usually a three-channel setup, to measure 3D deformations [13, 14]. The three channels work simultaneously, and three speckle interferograms are recorded in an image frame. The information of the three interferograms can be separated in the frequency domain, and their corresponding phase maps can later be calculated if proper spatial carrier frequencies are used [15]. The combination of the three phase maps allows the final 3D deformation to be obtained. SC-DSPI's measurement characteristics make synchronous measurement of 3D deformations possible because the three speckle interferograms are recorded together in one frame and the three phase maps are obtained simultaneously. Dynamic measurement of deformations is also possible because only one image frame is used to measure deformations, eliminating the need for a specified time interval [16]. The dynamic measurement speed depends on the camera frame rate. Though SC-DSPI outperforms PS-DSPI in terms of synchronous and dynamic measurement, its disadvantages include a lower-quality phase map [17], greater loss of laser energy, and much smaller measuring area, thus limiting its use in practical applications. PS-DSPI and SC-DSPI have their respective characteristics and are employed in different applications. However, their respective defects limit their wide use in engineering. Their area of application could be expanded if both techniques could be combined together. However, this idea is not easy to realize due to their distinct optical setup. We have built a universal optical setup for both PS-DSPI and SC-DSPI. 3D deformations can be measured by this optical setup using either PS-DSPI or SC-DSPI. Thus the flexibility of deformation measurement in engineering is fulfilled by the use of the proposed optical setup. The optical setup is also very simple, robust, and easy to use. Arrangement of universal optical setup The universal optical setup for PS-DSPI and SC-DSPI adopts a three-channel optical arrangement. Each channel consists of an object and reference beam pair derived from an individual laser. Components in each channel are almost the same, but the laser wavelength can be different. The incident angles, or illumination angles, of the three object beams striking the measuring target are artificially arranged to achieve optimal 3D deformation measurement results. The illumination angles will be discussed later. The optical arrangement of the universal optical setup is depicted in Fig. 1. Considering the similarity of the three channels, the optical arrangement of only one channel is described to show the optical interference process. The laser beam is divided into object and reference beams by a beam splitter. The object beam then strikes the measuring target after being expanded by a negative lens or other optical components or parts with similar function, such as a microscope objective. 
The scattered light from the target is collected by an imaging lens, such as an aspheric lens, then reaches the image sensor of the camera via an aperture. The aperture works as a regulator of light intensity in PS-DSPI mode and a filter of spatial frequency in SC-DSPI mode. The reference beam is coupled into an optical fiber via a piezoelectric-transducer-driven mirror. The elongation of the piezoelectric transducer (PZT) is automatically controlled by a computer to modulate the optical path of the reference beam, resulting in the phase shift in the PS-DSPI measurement. The emergent light from the fiber strikes the camera sensor at a small angle between it and the optical axis. This angle determines the carrier frequency, a key parameter in the SC-DSPI measurement. The object and reference beams encounter each other on the camera sensor, resulting in optical interference. The generated speckle interferograms are captured by the camera and recorded by the computer for further processing. The other two channels follow the same principle, but have different illumination angles and reference beam incident angles. The differences in the incident angles of the reference beams guarantee the separation of the interferometric signals from the three channels in the frequency domain, when the setup works in the SC-DSPI mode. The illumination angle differences among the three channels result in different displacement sensitivity coefficients. The combination of these displacement sensitivity coefficients forms a displacement sensitivity matrix with which the relationship between the 3D deformations and the interferometric phases obtained by PS-DSPI and SC-DSPI is built. The phase determination and deformation calculation procedures are discussed in the next section. Various illumination angle combinations among the three channels yield different displacement sensitivity matrices. Among these combinations, right-angle distribution and homogeneous distribution, described in Fig. 2, are the two simplest and optimal arrangements. In both types, the magnitudes of the illumination angles are equal, but the directions differ. Typical illumination layouts for the universal optical setup. a Right angle distribution. b Homogeneous distribution When PS-DSPI is used to measure 3D deformations, the three channels are enabled in turn by opening the shutters in front of each laser. Only one interferogram, generated by a pair of object and reference beams from a channel, is captured by the camera at a time. The implementation of a round of measurements using the three channels in turn yields three equations which express the mathematical relationship between the interferometric phases and image intensities. When SC-DSPI is used for measurement, the three shutters are opened together, resulting in three pairs of object and reference beams emerging on the camera sensor simultaneously. Each object beam- reference beam pair generates an interferogram, resulting in the simultaneous recording of three independent interferograms. The three interferograms are later separated in the frequency domain after a Fourier transform is performed on them. The interferometric phases are extracted from the separated interferograms after an inverse Fourier transform is performed. 
Phase determination using PS-DSPI The interferogram generated by the PS-DSPI can be expressed as $$ I\left(x,y\right)={I}_0\left(x,y\right)+B\left(x,y\right) \cos \left[\phi \left(x,y\right)+2\pi {f}_xx+2\pi {f}_yy\right], $$ where I(x, y) is the intensity distribution of the interferogram, I 0(x, y) is the background light, B(x, y) is a coefficient correlating with the contrast, ϕ(x, y) is the interferometric phase, f x and f y indicate the carrier frequencies which are introduced by the slightly deflected reference beams, and (x, y) indicates the two-dimensional distribution. The interferogram intensity I(x, y) is recorded by the camera, and the carrier frequencies f x and f y are determined by the incidence angle of the reference beam, but the three remaining variables in Eq. (1) are unknown, making the equation unsolvable. Additional conditions need to be added to resolve this problem. Typically, the additional condition is a series of artificial phase changes. The method to solve the equation by artificially changing the interferometric phase is known as phase shifting. This method can be further divided into temporal and spatial phase shifting. The temporal phase shifting, which changes the phase over time, is the dominant phase determination method in PS-DSPI due to its ease of use and ability to formulate high-quality phase maps. The number of steps and phase change intervals are multifarious [18]. For example, the popular four-step temporal phase shift changes the phase four times with an interval of π/2. As a result, four equations are obtained as $$ {I}_i\left(x,y\right)={I}_0\left(x,y\right)+B\left(x,y\right) \cos \left[\phi \left(x,y\right)+2\pi {f}_xx+2\pi {f}_yy+\left(i-1\right)\frac{\pi }{2}\right],\left(i=1,2,3,4\right). $$ Solving Eq. (2) for ϕ(x, y) results in the following expression: $$ \phi \left(x,y\right)={ \tan}^{\hbox{-} 1}\frac{I_4\left(x,y\right)-{I}_2\left(x,y\right)}{I_1\left(x,y\right)-{I}_3\left(x,y\right)}. $$ After the measuring target has been deformed, the phase shift is carried out again to determine the interferometric phase according to the deformed state. The phase difference is then determined by simply subtracting the phase before deformation from the phase after deformation. This is expressed as $$ \varDelta {\phi}_1\left(x,y\right)={\phi}_a\left(x,y\right)-{\phi}_b\left(x,y\right), $$ where ϕ a (x, y) and ϕ b (x, y) are the phase distributions after and before the deformation, respectively. The other two phase differences Δϕ 2(x, y) and Δϕ 3(x, y) are determined by performing the same procedure on the other channels. In the proposed universal optical setup, the phase shift is carried out by the PZT. A PZT elongation of λ/8, where λ is the laser wavelength, causes a phase shift of π/2, which is the amount required by the four-step temporal phase shift. Fine control of a well-calibrated PZT aids in the precise determination of the interferometric phase using PS-DSPI. Phase determination using SC-DSPI Due to the simultaneous recording of the three interferograms in the SC-DSPI mode, the image intensity is the sum of all interferograms, which is expressed by $$ {I}_s\left(x,y\right)={I}_{s0}\left(x,y\right)+{\displaystyle \sum_{i=1}^3{B}_i\left(x,y\right) \cos \left[{\phi}_i\left(x,y\right)+2\pi {f}_{ix}x+2\pi {f}_{iy}y\right]}, $$ where I s0(x, y) is the sum of the background lights. Aided by Euler's formula, Eq. 
Eq. (5) can be transformed to
$$ I_s(x,y)=I_{s0}(x,y)+\sum_{i=1}^{3}\left[C_i(x,y)e^{j2\pi(f_{ix}x+f_{iy}y)}+C_i^{*}(x,y)e^{-j2\pi(f_{ix}x+f_{iy}y)}\right], \qquad (6) $$
where C_i(x, y) = B_i(x, y)exp[jϕ_i(x, y)]/2 and * denotes the complex conjugate. After a Fourier transform is performed, Eq. (6) becomes
$$ F(f_{\xi},f_{\eta})=FT\left[I_s(x,y)\right]=A(f_{\xi},f_{\eta})+\sum_{i=1}^{3}\left[P_i(f_{\xi}-f_{ix},f_{\eta}-f_{iy})+Q_i(f_{\xi}+f_{ix},f_{\eta}+f_{iy})\right], \qquad (7) $$
where FT denotes the Fourier transform, (f_ξ, f_η) are the coordinates in the frequency domain, and
$$ \left\{\begin{array}{l} A(f_{\xi},f_{\eta})=FT\left[I_{s0}(x,y)\right] \\ P_i(f_{\xi}-f_{ix},f_{\eta}-f_{iy})=FT\left[C_i(x,y)e^{j2\pi(f_{ix}x+f_{iy}y)}\right] \\ Q_i(f_{\xi}+f_{ix},f_{\eta}+f_{iy})=FT\left[C_i^{*}(x,y)e^{-j2\pi(f_{ix}x+f_{iy}y)}\right] \end{array}\right. \qquad (8) $$
Eq. (7) shows that there are a total of seven components in the frequency domain, where P_i(f_ξ − f_ix, f_η − f_iy) and Q_i(f_ξ + f_ix, f_η + f_iy) are three pairs of conjugate components and A(f_ξ, f_η) represents the low-frequency background signal. The locations of P_i and Q_i are determined by the carrier frequencies f_ix and f_iy. All seven components can be well separated by fine adjustment of the incidence angles of the reference beams and of the aperture in the universal optical setup. To illustrate the frequency spectrum obtained by SC-DSPI, Fig. 3 shows a distribution of the seven components generated by the proposed optical setup in an experiment. More information about the synchronous recording and separation of the multiple interferograms can be found in Refs. [19] and [20].

Fig. 3 Frequency spectrum obtained by SC-DSPI

Since both P_i(f_ξ − f_ix, f_η − f_iy) and Q_i(f_ξ + f_ix, f_η + f_iy) contain the same interferometric phase, either of them can be used for phase extraction. This is realized by applying an inverse Fourier transform to the selected component and performing further calculations. For example, if P_i(f_ξ − f_ix, f_η − f_iy) is chosen, the phase distribution corresponding to the first channel is
$$ \phi_1(x,y)=\tan^{-1}\frac{\mathrm{Im}\left[p_1(x,y)\right]}{\mathrm{Re}\left[p_1(x,y)\right]}, \qquad (9) $$
where Im and Re denote the imaginary and real parts of a complex number and
$$ p_1(x,y)=FT^{-1}\left[P_1(f_{\xi}-f_{1x},f_{\eta}-f_{1y})\right], \qquad (10) $$
where FT^{-1} is the inverse Fourier transform. The phases corresponding to the other two channels, as well as the phases after deformation, are obtained in the same way. Finally, the three individual phase-difference distributions Δϕ_1(x, y), Δϕ_2(x, y) and Δϕ_3(x, y) are determined by subtracting the phases before deformation from the corresponding phases after deformation.
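Both phase-determination routes described above reduce to a few array operations per pixel. The following is a minimal sketch, not the authors' software; the function names, the rectangular frequency window and the explicit carrier-removal step are illustrative assumptions (NumPy is used for the array work):

import numpy as np

def phase_four_step(I1, I2, I3, I4):
    # Four-step temporal phase shifting, Eq. (3): phi = arctan[(I4 - I2) / (I1 - I3)].
    # arctan2 keeps the correct quadrant; the result is wrapped to (-pi, pi].
    return np.arctan2(I4 - I2, I1 - I3)

def phase_spatial_carrier(I_sum, fx, fy, half_width):
    # Spatial-carrier extraction, Eqs. (7)-(10): Fourier-transform the combined
    # interferogram, window one carrier lobe P_i, inverse-transform it, and remove
    # the carrier term 2*pi*(fx*x + fy*y); the wrapped phase is the argument.
    rows, cols = I_sum.shape
    spectrum = np.fft.fftshift(np.fft.fft2(I_sum))
    u = np.fft.fftshift(np.fft.fftfreq(cols))   # horizontal frequencies, cycles/pixel
    v = np.fft.fftshift(np.fft.fftfreq(rows))   # vertical frequencies, cycles/pixel
    U, V = np.meshgrid(u, v)
    lobe = np.where((np.abs(U - fx) < half_width) & (np.abs(V - fy) < half_width),
                    spectrum, 0.0)              # rectangular window around the chosen lobe
    p = np.fft.ifft2(np.fft.ifftshift(lobe))    # Eq. (10): p_1(x, y)
    X, Y = np.meshgrid(np.arange(cols), np.arange(rows))
    return np.angle(p * np.exp(-2j * np.pi * (fx * X + fy * Y)))   # Eq. (9)

def wrapped_difference(phi_after, phi_before):
    # Eq. (4): phase difference between the deformed and undeformed states,
    # wrapped back into (-pi, pi].
    return np.angle(np.exp(1j * (phi_after - phi_before)))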
Calculation of 3D deformations

The relationship between the deformation and the interferometric phase difference in PS-DSPI and SC-DSPI can be expressed by
$$ \varDelta\phi(x,y)=\frac{2\pi}{\lambda}\,\overrightarrow{d}(x,y)\cdot\overrightarrow{s}(x,y), \qquad (11) $$
where Δϕ(x, y) is the phase difference, \( \overrightarrow{d}(x,y) \) is the deformation vector, and \( \overrightarrow{s}(x,y) \) is the displacement sensitivity vector, which is dependent on the illumination angles. If the right-angle-distribution optical arrangement is used, Eq. (11) can be transformed to
$$ \left\{\begin{array}{l} \varDelta\phi_1(x,y)=\frac{2\pi}{\lambda_1}\left[u(x,y)\sin\alpha+w(x,y)(1+\cos\alpha)\right] \\ \varDelta\phi_2(x,y)=\frac{2\pi}{\lambda_2}\left[v(x,y)\sin\alpha+w(x,y)(1+\cos\alpha)\right] \\ \varDelta\phi_3(x,y)=\frac{2\pi}{\lambda_3}\left[u(x,y)\sin(-\alpha)+w(x,y)(1+\cos\alpha)\right] \end{array}\right. \qquad (12) $$
where λ_1, λ_2, and λ_3 are the wavelengths of the three lasers; u(x, y), v(x, y), and w(x, y) are the three components of \( \overrightarrow{d}(x,y) \) in three dimensions; and α is the illumination angle. To simplify the calculation, all laser wavelengths are assumed to be the same. This assumption, used with Eq. (12), results in the following expressions for the three deformation vector components:
$$ \left\{\begin{array}{l} u(x,y)=\frac{\lambda}{4\pi\sin\alpha}\left[\varDelta\phi_1(x,y)-\varDelta\phi_3(x,y)\right] \\ v(x,y)=\frac{\lambda}{4\pi\sin\alpha}\left[2\varDelta\phi_2(x,y)-\varDelta\phi_1(x,y)-\varDelta\phi_3(x,y)\right] \\ w(x,y)=\frac{\lambda}{4\pi(1+\cos\alpha)}\left[\varDelta\phi_1(x,y)+\varDelta\phi_3(x,y)\right] \end{array}\right. \qquad (13) $$
For the homogeneous-distribution optical arrangement, Eq. (11) becomes
$$ \left\{\begin{array}{l} \varDelta\phi_1(x,y)=\frac{2\pi}{\lambda_1}\left[u(x,y)\sin\alpha\cos 30^{\circ}+v(x,y)\sin\alpha\cos 60^{\circ}+w(x,y)(1+\cos\alpha)\right] \\ \varDelta\phi_2(x,y)=\frac{2\pi}{\lambda_2}\left[v(x,y)\sin\alpha+w(x,y)(1+\cos\alpha)\right] \\ \varDelta\phi_3(x,y)=\frac{2\pi}{\lambda_3}\left[u(x,y)\sin(-\alpha)\cos 30^{\circ}+v(x,y)\sin(-\alpha)\cos 60^{\circ}+w(x,y)(1+\cos\alpha)\right] \end{array}\right. \qquad (14) $$
If the laser wavelengths are assumed to be the same, the deformation vector components have the following expressions:
$$ \left\{\begin{array}{l} u(x,y)=\frac{\sqrt{3}\lambda}{12\pi\sin\alpha}\left[3\varDelta\phi_1(x,y)-2\varDelta\phi_2(x,y)-\varDelta\phi_3(x,y)\right] \\ v(x,y)=\frac{\lambda}{4\pi\sin\alpha}\left[2\varDelta\phi_2(x,y)-\varDelta\phi_1(x,y)-\varDelta\phi_3(x,y)\right] \\ w(x,y)=\frac{\lambda}{4\pi(1+\cos\alpha)}\left[\varDelta\phi_1(x,y)+\varDelta\phi_3(x,y)\right] \end{array}\right. \qquad (15) $$
The solutions for v(x, y) and w(x, y) are the same for the right-angle-distribution and homogeneous-distribution layouts, but the solutions for u(x, y) differ.

An experimental setup based on Fig. 1 and Fig. 2a was built to verify the validity of the presented universal optical setup. Three single-longitudinal-mode diode-pumped solid-state lasers, all with a wavelength of 532 nm, were used as the light sources. A complementary-metal-oxide-semiconductor (CMOS) camera (CatchBEST Co. Ltd., MU3C500M-MRYYO, 5.0 megapixels, 14 fps) and an aspheric lens with a focal length of 100 mm were used to capture images. The position of the aspheric lens was carefully adjusted to obtain sharp images. Three PZT chips (Thorlabs, Inc., PA4FE, 150 V, 2.5 μm travel) were used to actuate the phase shifts. The illumination angles were set to around 30°, and the incidence angles of the reference beams were carefully adjusted to guarantee that all components in the frequency domain were well separated. An object with a circular planar surface was used as the measuring target. The out-of-plane deformation w(x, y) was generated by applying a load to the center of the back of the target, while the in-plane deformations u(x, y) and v(x, y) were generated by rotating the object surface. All motions were finely controlled using manual micrometer heads. The measuring area in the experiment was 60 mm × 40 mm.

With self-developed programs, both the PS-DSPI and SC-DSPI modes were used to measure the 3D deformations. The obtained phase differences corresponding to each channel are shown in Fig. 4. Figure 4(a1), (a2) and (a3) are the three phase differences obtained by PS-DSPI, and Fig. 4(b1), (b2) and (b3) are the phase differences obtained by SC-DSPI. These phase differences are wrapped due to the arctangent operations in Eqs. (3) and (9). The true phase differences are obtained after image smoothing and phase unwrapping are performed [21]. All of the phase maps in Fig. 4 present clear and regular patterns, illustrating the capability of both PS-DSPI and SC-DSPI to obtain high-quality phase maps. However, differences in image quality between the phase maps obtained by PS-DSPI and SC-DSPI can be seen after partial enlargement of the original phase maps. Local regions of the same size, corresponding to the first channel in the PS-DSPI and SC-DSPI modes respectively, are marked by yellow boxes in Fig. 4(a1) and (b1). The enlarged views of the marked regions, shown in Fig. 5, clearly indicate that the speckle particles in the phase map obtained by PS-DSPI are much smaller than those in the phase map obtained by SC-DSPI.
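For reference, the per-pixel evaluation of Eq. (13), which is used below to produce the deformation maps of Fig. 6, amounts to three linear combinations of the phase-difference maps. The following is a minimal illustration, not the authors' code; the array names, the assumption of already unwrapped phase-difference maps and the use of NumPy are illustrative choices:

import numpy as np

def deformation_right_angle(dphi1, dphi2, dphi3, lam, alpha):
    # Eq. (13): in-plane components u, v and out-of-plane component w from the
    # three unwrapped phase-difference maps of the right-angle layout.
    # lam is the laser wavelength (metres), alpha the illumination angle (radians).
    u = lam / (4.0 * np.pi * np.sin(alpha)) * (dphi1 - dphi3)
    v = lam / (4.0 * np.pi * np.sin(alpha)) * (2.0 * dphi2 - dphi1 - dphi3)
    w = lam / (4.0 * np.pi * (1.0 + np.cos(alpha))) * (dphi1 + dphi3)
    return u, v, w

The homogeneous-distribution case of Eq. (15) follows the same pattern with the corresponding coefficients.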
Because the speckle particles are smaller, the noise in the PS-DSPI phase maps can be filtered more easily than that in the SC-DSPI phase maps; equivalently, more smoothing passes are needed in the SC-DSPI mode, and the repeated smoothing introduces a larger error. Consequently, PS-DSPI measurement is usually more accurate than SC-DSPI measurement. SC-DSPI, however, proves its value through its ability to perform dynamic, synchronous 3D deformation measurement.

Fig. 4 Original wrapped phase maps: (a1) phase 1 from PS-DSPI; (a2) phase 2 from PS-DSPI; (a3) phase 3 from PS-DSPI; (b1) phase 1 from SC-DSPI; (b2) phase 2 from SC-DSPI; (b3) phase 3 from SC-DSPI

Fig. 5 Comparison of phase maps: (a) partial enlargement of the phase map from PS-DSPI; (b) partial enlargement of the phase map from SC-DSPI

The final 3D deformations shown in Fig. 6 were determined after the calculations described by Eq. (13) were performed. The horizontal coordinates represent the object surface plane and the vertical coordinate represents the deformation. Figure 6(a1), (a2) and (a3) show the 3D deformations u, v and w obtained using PS-DSPI, respectively, while Fig. 6(b1), (b2) and (b3) show the 3D deformations obtained using SC-DSPI. The in-plane deformations u and v are orthogonal and vary linearly along the horizontal direction. These results indicate that the magnitude of the relative displacement of the circular surface caused by the rotation increases gradually and linearly, in accordance with the theoretical analysis. The out-of-plane deformation w presents a distribution that decreases gradually from the periphery to the loading center. Although slight differences can be found between the results obtained by PS-DSPI and SC-DSPI, because the loading could not be duplicated exactly, the deformation data prove that reasonable results can be obtained by both PS-DSPI and SC-DSPI. Consequently, with the proposed universal optical setup, both PS-DSPI and SC-DSPI can be used to measure 3D deformation.

Fig. 6 Height maps of the 3D deformations: (a1) deformation u obtained by PS-DSPI; (a2) deformation v obtained by PS-DSPI; (a3) deformation w obtained by PS-DSPI; (b1) deformation u obtained by SC-DSPI; (b2) deformation v obtained by SC-DSPI; (b3) deformation w obtained by SC-DSPI

A universal optical setup with a simple structure for both PS-DSPI and SC-DSPI has been introduced, adding flexibility to full-field 3D deformation measurements. Experimental results show that clear phase maps with regular patterns and reasonable deformation measurements can be obtained using this setup, verifying the validity of the presented method. Compared with traditional separate PS-DSPI and SC-DSPI setups, the performance of the proposed setup is not degraded. Moreover, its versatility improves its adaptability to variations in the measurement target. DSPI instruments based on the proposed universal optical setup are therefore expected to gain more applications and play an important role in practical engineering.

Abbreviations
CMOS: complementary metal-oxide-semiconductor
DSPI: digital speckle pattern interferometry
PS-DSPI: phase-shifting digital speckle pattern interferometry
PZT: piezoelectric transducer
SC-DSPI: spatial-carrier digital speckle pattern interferometry

References
[1] Kervran, Y., Sagazan, O.D., Crand, S., et al.: Microcrystalline silicon: strain gauge and sensor arrays on flexible substrate for the measurement of high deformations. Sensors Actuators A Phys.
236(1), 273–280 (2015)
[2] Tiziani, H.J., Pedrini, G.: From speckle pattern photography to digital holographic interferometry. Appl. Opt. 52(1), 30–44 (2013)
[3] Gao, Z., Deng, Y., Duan, Y., et al.: Continual in-plane displacement measurement with temporal wavelet transform speckle pattern interferometry. Rev. Sci. Instrum. 83(1), 015107 (2012)
[4] Shao, X., Dai, X., He, X.: Noise robustness and parallel computation of the inverse compositional Gauss–Newton algorithm in digital image correlation. Opt. Laser. Eng. 71, 9–19 (2015)
[5] Zhu, R., Xie, H., Tang, M., et al.: Reconstruction of planar periodic structures based on Fourier analysis of moiré patterns. Opt. Eng. 54(4), 044102 (2015)
[6] Pedrini, G., Osten, W.: Time resolved digital holographic interferometry for investigations of dynamical events in mechanical components and biological tissues. Strain 43, 240–249 (2007)
[7] Solís, S.M., Santoyo, F.M., Hernández-Montes, M.S.: 3D displacement measurements of the tympanic membrane with digital holographic interferometry. Opt. Express 20(5), 5613–5621 (2012)
[8] Yang, L.X., Xie, X., Zhu, L., et al.: Review of electronic speckle pattern interferometry (ESPI) for three dimensional displacement measurement. Chin. J. Mech. Eng. 27(1), 1–13 (2014)
[9] Yang, L.X., Zhang, P., Liu, S., et al.: Measurement of strain distributions in mouse femora with 3D-digital speckle pattern interferometry. Opt. Laser. Eng. 45(8), 843–851 (2007)
[10] Bhaduri, B., Kothiyal, M.P., Mohan, N.K.: A comparative study of phase-shifting algorithms in digital speckle pattern interferometry. Optik 119(3), 147–152 (2008)
[11] Bhaduri, B., Mohan, N.K., Kothiyal, M.P.: Digital speckle pattern interferometry using spatial phase shifting: influence of intensity and phase gradients. J. Mod. Opt. 55(6), 861–876 (2008)
[12] Zhu, L., Wang, Y., Xu, N., et al.: Real-time monitoring of phase maps of digital shearography. Opt. Eng. 52(10), 101902 (2013)
[13] Alvarez, A.S., Ibarra, M.H., Santoyo, F.M., et al.: Strain determination in bone sections with simultaneous 3D digital holographic interferometry. Opt. Laser. Eng. 57, 101–108 (2014)
[14] Wang, Y., Sun, J., Li, J., et al.: Synchronous measurement of three-dimensional deformations by multicamera digital speckle patterns interferometry. Opt. Eng. 55(9), 091408 (2016)
[15] Kulkarni, R., Rastogi, P.: Multiple phase derivative estimation using autoregressive modeling in holographic interferometry. Meas. Sci. Technol. 26(3), 035202 (2015)
[16] Tay, C.J., Quan, C., Chen, W.: Dynamic measurement by digital holographic interferometry based on complex phasor method. Opt. Laser Technol. 41(2), 172–180 (2009)
[17] Wu, S., Gao, X., Lv, Y., et al.: Micro deformation measurement using temporal phase-shifting and spatial-carrier digital speckle pattern interferometry. SAE Technical Paper 2016-01-0415 (2016). doi:10.4271/2016-01-0415
[18] Wu, S., Zhu, L., Feng, Q., et al.: Digital shearography with in situ phase shift calibration. Opt. Laser. Eng. 50(9), 1260–1266 (2012)
[19] Xie, X., Chen, X., Li, J., et al.: Measurement of in-plane strain with dual beam spatial phase-shift digital shearography. Meas. Sci. Technol. 26(11), 115202 (2015)
[20] Xie, X., Xu, N., Sun, J., et al.: Simultaneous measurement of deformation and the first derivative with spatial phase-shift digital shearography. Opt. Commun. 286, 277–281 (2013)
[21] Wu, S., Zhu, L., Pan, S., et al.: Spatiotemporal three-dimensional phase unwrapping in digital speckle pattern interferometry. Opt. Lett. 41(5), 1050–1053 (2016)

Acknowledgements
We express sincere thanks to Mr.
Bernard Sia from the Optical Lab of Oakland University, who carefully and thoroughly read the manuscript and provided valuable criticisms. The research is supported by the National Natural Science Foundation of China (Grant No. 51275054), the Beijing Municipal Commission of Education (Grant No. KM201511232004), and the National Major Scientific Instrument and Equipment Development Project of China (Grant No. 2016YFF0101801). All authors participated in the method discussion and result analysis. The experiments were conducted by SW and YF. All authors have read and agreed with the contents of the final manuscript.

Sijin Wu received his PhD in 2012 and is now a faculty member at Beijing Information Science and Technology University. His research interest focuses on optical metrology, such as digital speckle pattern interferometry and digital shearography. Mingli Dong received her PhD in physical electronics from Beijing Institute of Technology, China. She is a professor and dean in the School of Instrumentation Science and Opto-electronics Engineering at Beijing Information Science and Technology University in China. She has multidisciplinary research experience including machine vision measurement technology, optical metrology, and biomedical detection technology. Yao Fang is a master's student at Beijing Information Science and Technology University. Her research interest is spatial-carrier digital speckle pattern interferometry. Lianxiang Yang received his PhD in mechanical engineering from the University of Kassel, Germany, in 1997. He is the director of the Optical Laboratory and a professor in the Department of Mechanical Engineering at Oakland University, Rochester, Michigan, USA. He has multidisciplinary research experience including optical metrology, experimental strain/stress analysis, nondestructive testing, and 3D computer vision. He is a fellow of SPIE, a Changjiang scholar of Hefei University of Technology, and an adjunct professor of Beijing Information Science and Technology University.

Author affiliations: School of Instrumentation Science and Opto-electronics Engineering, Beijing Information Science and Technology University, Beijing, 100192, China (Sijin Wu, Mingli Dong, Yao Fang, Lianxiang Yang); Department of Mechanical Engineering, Oakland University, Rochester, MI, 48309, USA (Lianxiang Yang). Correspondence to Sijin Wu.

Citation: Wu, S., Dong, M., Fang, Y., et al.: Universal optical setup for phase-shifting and spatial-carrier digital speckle pattern interferometry. J. Eur. Opt. Soc.-Rapid Publ. 12, 14 (2016). https://doi.org/10.1186/s41476-016-0016-6

Keywords: phase shift; spatial carrier; deformation measurement; universal optical setup; digital holographic interferometry
Entropy

In statistical mechanics, entropy is an extensive property of a thermodynamic system. It is closely related to the number of microscopic configurations (known as microstates) that are consistent with the macroscopic quantities that characterize the system (such as its volume, pressure and temperature). Under the assumption that each microstate is equally probable, the entropy S is the natural logarithm of the number of microstates, multiplied by the Boltzmann constant k_B. Formally (assuming equiprobable microstates), S = k_{\mathrm{B}} \ln \Omega. Macroscopic systems typically have a very large number of possible microscopic configurations. For example, the entropy of an ideal gas is proportional to the number of gas molecules N. The number of molecules in twenty liters of gas at room temperature and atmospheric pressure is roughly the Avogadro number, about 6 × 10^23. At equilibrium, each of the configurations can be regarded as random and equally likely.

The second law of thermodynamics states that the entropy of an isolated system never decreases over time. Such systems spontaneously evolve towards thermodynamic equilibrium, the state with maximum entropy. Non-isolated systems may lose entropy, provided their environment's entropy increases by at least that amount so that the total entropy increases. Entropy is a function of the state of the system, so the change in entropy of a system is determined by its initial and final states. In the idealization that a process is reversible, the entropy does not change, while irreversible processes always increase the total entropy.

Because it is determined by the number of random microstates, entropy is related to the amount of additional information needed to specify the exact physical state of a system, given its macroscopic specification. For this reason, it is often said that entropy is an expression of the disorder, or randomness, of a system, or of the lack of information about it. The concept of entropy plays a central role in information theory.

Boltzmann's constant, and therefore entropy, have dimensions of energy divided by temperature, with the unit joule per kelvin (J⋅K−1) in the International System of Units (or kg⋅m2⋅s−2⋅K−1 in terms of base units). The entropy of a substance is usually given as an intensive property – either entropy per unit mass (SI unit: J⋅K−1⋅kg−1) or entropy per unit amount of substance (SI unit: J⋅K−1⋅mol−1).

The French mathematician Lazare Carnot proposed in his 1803 paper Fundamental Principles of Equilibrium and Movement that in any machine the accelerations and shocks of the moving parts represent losses of moment of activity. In other words, in any natural process there exists an inherent tendency towards the dissipation of useful energy.
Building on this work, in 1824 Lazare's son Sadi Carnot published Reflections on the Motive Power of Fire which posited that in all heat-engines, whenever "caloric theory" (what is now known as heat) falls through a temperature difference, work or motive power can be produced from the actions of its fall from a hot to cold body. He made the analogy with that of how water falls in a water wheel. This was an early insight into the second law of thermodynamics. Carnot based his views of heat partially on the early 18th century "Newtonian hypothesis" that both heat and light were types of indestructible forms of matter, which are attracted and repelled by other matter, and partially on the contemporary views of Count Rumford who showed (1789) that heat could be created by friction as when cannon bores are machined. Carnot reasoned that if the body of the working substance, such as a body of steam, is returned to its original state at the end of a complete engine cycle, that "no change occurs in the condition of the working body". The first law of thermodynamics, deduced from the heat-friction experiments of James Joule in 1843, expresses the concept of energy, and its conservation in all processes; the first law, however, is unable to quantify the effects of friction and dissipation. In the 1850s and 1860s, German physicist Rudolf Clausius objected to the supposition that no change occurs in the working body, and gave this "change" a mathematical interpretation by questioning the nature of the inherent loss of usable heat when work is done, e.g. heat produced by friction. Clausius described entropy as the transformation-content, i.e. dissipative energy use, of a thermodynamic system or working body of chemical species during a change of state. This was in contrast to earlier views, based on the theories of Isaac Newton, that heat was an indestructible particle that had mass. Later, scientists such as Ludwig Boltzmann, Josiah Willard Gibbs, and James Clerk Maxwell gave entropy a statistical basis. In 1877 Boltzmann visualized a probabilistic way to measure the entropy of an ensemble of ideal gas particles, in which he defined entropy to be proportional to the natural logarithm of the number of microstates such a gas could occupy. Henceforth, the essential problem in statistical thermodynamics has been to determine the distribution of a given amount of energy E over N identical systems. Carathéodory linked entropy with a mathematical definition of irreversibility, in terms of trajectories and integrability. Definitions and descriptions There are two related definitions of entropy: the Thermodynamics definition and the statistical mechanics definition. Historically, the classical thermodynamics definition developed first. In the classical thermodynamics viewpoint, the system is composed of very large numbers of constituents (atoms, molecules) and the state of the system is described by the average thermodynamic properties of those constituents; the details of the system's constituents are not directly considered, but their behavior is described by macroscopically averaged properties, e.g. temperature, pressure, entropy, heat capacity. The early classical definition of the properties of the system assumed equilibrium. The classical thermodynamic definition of entropy has more recently been extended into the area of non-equilibrium thermodynamics. 
Later, the thermodynamic properties, including entropy, were given an alternative definition in terms of the statistics of the motions of the microscopic constituents of a system – modeled at first classically, e.g. Newtonian particles constituting a gas, and later quantum-mechanically (photons, phonons, spins, etc.). The statistical mechanics description of the behavior of a system is necessary because the definition of the properties of a system using classical thermodynamics becomes an increasingly unreliable method of predicting the final state of a system that is subject to some process.

There are many thermodynamic properties that are functions of state. This means that at a particular thermodynamic state (which should not be confused with the microscopic state of a system), these properties have a certain value. Often, if two properties of the system are determined, then the state is determined and the other properties' values can also be determined. For instance, a quantity of gas at a particular temperature and pressure has its state fixed by those values and thus has a specific volume that is determined by those values. As another instance, a system composed of a pure substance of a single phase at a particular uniform temperature and pressure is determined (and is thus a particular state) and is at not only a particular volume but also at a particular entropy. The fact that entropy is a function of state is one reason it is useful. In the Carnot cycle, the working fluid returns to the same state it had at the start of the cycle, hence the line integral of any state function, such as entropy, over this reversible cycle is zero.

Entropy is conserved for a reversible process. A reversible process is one that does not deviate from thermodynamic equilibrium while producing the maximum work. Any process that happens quickly enough to deviate from thermal equilibrium cannot be reversible. In these cases energy is lost to heat, total entropy increases, and the potential for maximum work to be done in the transition is also lost. More specifically, total entropy is conserved in a reversible process and not conserved in an irreversible process. For example, in the Carnot cycle, while the heat flow from the hot reservoir to the cold reservoir represents an increase in entropy, the work output, if reversibly and perfectly stored in some energy storage mechanism, represents a decrease in entropy that could be used to operate the heat engine in reverse and return to the previous state; thus the total entropy change is still zero at all times if the entire process is reversible. An irreversible process increases entropy.

The concept of entropy arose from Rudolf Clausius's study of the Carnot cycle. In a Carnot cycle, heat Q_H is absorbed isothermally at temperature T_H from a 'hot' reservoir and given up isothermally as heat Q_C to a 'cold' reservoir at T_C. According to Carnot's principle, work can only be produced by the system when there is a temperature difference, and the work should be some function of the difference in temperature and the heat absorbed (Q_H). Carnot did not distinguish between Q_H and Q_C, since he was using the incorrect hypothesis that caloric theory was valid, and hence that heat was conserved (the incorrect assumption that Q_H and Q_C were equal), when, in fact, Q_H is greater than Q_C.
Through the efforts of Clausius and Lord Kelvin, it is now known that the maximum work that a heat engine can produce is the product of the Carnot efficiency and the heat absorbed from the hot reservoir:
$$ W=\left(1-\frac{T_\text{C}}{T_\text{H}}\right)Q_\text{H}. \qquad (1) $$
To derive the Carnot efficiency, which is 1 − T_C/T_H (a number less than one), Kelvin had to evaluate the ratio of the work output to the heat absorbed during the isothermal expansion with the help of the Carnot–Clapeyron equation, which contained an unknown function known as the Carnot function. The possibility that the Carnot function could be the temperature as measured from a zero temperature was suggested by Joule in a letter to Kelvin. This allowed Kelvin to establish his absolute temperature scale. It is also known that the work produced by the system is the difference between the heat absorbed from the hot reservoir and the heat given up to the cold reservoir:
$$ W=Q_\text{H}-Q_\text{C}. \qquad (2) $$
Since the latter is valid over the entire cycle, this gave Clausius the hint that at each stage of the cycle, work and heat would not be equal, but rather their difference would be a state function that would vanish upon completion of the cycle. The state function was called the internal energy and it became the first law of thermodynamics.

Now equating (1) and (2) gives
$$ \frac{Q_\text{H}}{T_\text{H}}-\frac{Q_\text{C}}{T_\text{C}}=0, \qquad \text{or} \qquad \frac{Q_\text{H}}{T_\text{H}}=\frac{Q_\text{C}}{T_\text{C}}. $$
This implies that there is a function of state which is conserved over a complete Carnot cycle. Clausius called this state function entropy. One can see that entropy was discovered through mathematics rather than through laboratory results. It is a mathematical construct and has no easy physical analogy. This makes the concept somewhat obscure or abstract, akin to how the concept of energy arose.

Clausius then asked what would happen if the system should produce less work than that predicted by Carnot's principle. The right-hand side of the first equation would be the upper bound of the work output by the system, which would now be converted into an inequality
$$ W<\left(1-\frac{T_\text{C}}{T_\text{H}}\right)Q_\text{H}. $$
When the second equation is used to express the work as a difference in heats, we get
$$ Q_\text{H}-Q_\text{C}<\left(1-\frac{T_\text{C}}{T_\text{H}}\right)Q_\text{H}, \qquad \text{or} \qquad Q_\text{C}>\frac{T_\text{C}}{T_\text{H}}Q_\text{H}. $$
So more heat is given up to the cold reservoir than in the Carnot cycle. If we denote the entropies by S_i = Q_i/T_i for the two states, then the above inequality can be written as a decrease in the entropy,
$$ S_\text{H}-S_\text{C}<0, \qquad \text{or} \qquad S_\text{H}<S_\text{C}. $$
The entropy that leaves the system is greater than the entropy that enters the system, implying that some irreversible process prevents the cycle from producing the maximum amount of work predicted by the Carnot equation.

The Carnot cycle and Carnot efficiency are useful because they define the upper bound of the possible work output and the efficiency of any classical thermodynamic system. Other cycles, such as the Otto cycle, the Diesel cycle and the Brayton cycle, can be analyzed from the standpoint of the Carnot cycle. Any machine or process that converts heat to work and is claimed to produce an efficiency greater than the Carnot efficiency is not viable because it violates the second law of thermodynamics. For very small numbers of particles in the system, statistical thermodynamics must be used.
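A small numerical check makes this bookkeeping concrete. The sketch below uses arbitrary illustrative values (assumptions for the example, not data from any source): a reversible engine returns exactly as much entropy to the cold reservoir as it removes from the hot one, while a less efficient engine produces entropy.

T_hot, T_cold = 600.0, 300.0               # assumed reservoir temperatures in kelvin
Q_hot = 1000.0                             # assumed heat absorbed from the hot reservoir, joules

eta_carnot = 1.0 - T_cold / T_hot          # Carnot efficiency
W_max = eta_carnot * Q_hot                 # maximum (reversible) work
Q_cold_rev = Q_hot - W_max                 # heat rejected by the reversible engine
print(Q_hot / T_hot, Q_cold_rev / T_cold)  # equal: entropy taken in = entropy given up

W_irr = 0.8 * W_max                        # an irreversible engine producing less work
Q_cold_irr = Q_hot - W_irr
print(Q_cold_irr / T_cold - Q_hot / T_hot) # positive: net entropy is produced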
The efficiency of devices such as photovoltaic cells requires an analysis from the standpoint of quantum mechanics.

The thermodynamic definition of entropy was developed in the early 1850s by Rudolf Clausius and essentially describes how to measure the entropy of an isolated system in thermodynamic equilibrium with its parts. Clausius created the term entropy as an extensive thermodynamic variable that was shown to be useful in characterizing the Carnot cycle. Heat transfer along the isotherm steps of the Carnot cycle was found to be proportional to the temperature of a system (known as its absolute temperature). This relationship was expressed in increments of entropy equal to the ratio of incremental heat transfer divided by temperature, which was found to vary over the thermodynamic cycle but to return to the same value at the end of every cycle. Thus entropy was found to be a function of state, specifically a thermodynamic state of the system. While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. Following the second law of thermodynamics, the entropy of an isolated system always increases in irreversible processes. The difference between an isolated system and a closed system is that heat may not flow to and from an isolated system, but heat flow to and from a closed system is possible. Nevertheless, for both closed and isolated systems, and indeed also in open systems, irreversible thermodynamic processes may occur.

According to the Clausius theorem, for a reversible cyclic process:
$$ \oint \frac{\delta Q_\text{rev}}{T} = 0. $$
This means the line integral \int_L \frac{\delta Q_\text{rev}}{T} is path-independent. So we can define a state function S, called entropy, which satisfies
$$ dS = \frac{\delta Q_\text{rev}}{T}. $$
Clausius coined the name entropy (Entropie) for S in 1865. He gives "transformational content" (Verwandlungsinhalt) as a synonym, paralleling his "thermal and ergonal content" (Wärme- und Werkinhalt) as the name of the internal energy U, but preferring the term entropy as a close parallel of energy, formed by replacing the root of ἔργον ("work") by that of τροπή ("transformation"). In Clausius's words: "If one looks for a descriptive name for S, one could, just as the quantity U is said to be the heat and work content of the body, say of the quantity S that it is the transformation content of the body. Since I think it better to take the names of such quantities, which are important for science, from the ancient languages, so that they can be used unchanged in all modern languages, I propose to call the quantity S the entropy of the body, after the Greek word ἡ τροπή, transformation. I have deliberately formed the word entropy to be as similar as possible to the word energy, for the two quantities that these words denote are so closely related in their physical meanings that a certain similarity in their names seems appropriate to me."

To find the entropy difference between any two states of a system, the integral must be evaluated for some reversible path between the initial and final states. Since entropy is a state function, the entropy change of the system for an irreversible path is the same as for a reversible path between the same two states.
However, the entropy change of the surroundings will be different. We can only obtain the change of entropy by integrating the above formula. To obtain the absolute value of the entropy, we need the third law of thermodynamics, which states that S = 0 at absolute zero for perfect crystals.

From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process where the system gives up energy ΔE and its entropy falls by ΔS, a quantity at least T_R ΔS of that energy must be given up to the system's surroundings as unusable heat (T_R is the temperature of the system's external surroundings). Otherwise the process cannot go forward. In classical thermodynamics, the entropy of a system is defined only if it is in thermodynamic equilibrium.

The statistical definition was developed by Ludwig Boltzmann in the 1870s by analyzing the statistical behavior of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant factor, which has since been known as Boltzmann's constant. In summary, the thermodynamic definition of entropy provides the experimental definition of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature.

The interpretation of entropy in statistical mechanics is the measure of uncertainty, or "mixed-up-ness" in the phrase of Gibbs, which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system, including the position and velocity of every molecule. The more such states are available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways in which a system may be arranged, often taken to be a measure of "disorder" (the higher the entropy, the higher the disorder). This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) which could give rise to the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant. Specifically, entropy is a logarithmic measure of the number of states with significant probability of being occupied:
$$ S = -k_{\mathrm{B}}\sum_i p_i \log p_i, $$
or, equivalently, the expected value of the logarithm of the probability that a microstate will be occupied,
$$ S = -k_{\mathrm{B}} \operatorname{E}_i(\log p_i), $$
where k_B is the Boltzmann constant, equal to 1.380 649 × 10⁻²³ J/K. The summation is over all the possible microstates of the system, and p_i is the probability that the system is in the i-th microstate (Frigg, R. and Werndl, C.: "Entropy – A Guide for the Perplexed", in Probabilities in Physics, Beisbart, C. and Hartmann, S., Eds., Oxford University Press, Oxford, 2010).
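As a small numerical illustration of this definition (the probabilities below are made up for the example), the Gibbs expression reduces to k_B ln Ω when all Ω microstates are equally probable, and gives a smaller value for a more sharply peaked distribution:

import numpy as np

k_B = 1.380649e-23  # Boltzmann constant, J/K

def gibbs_entropy(p):
    # S = -k_B * sum_i p_i ln p_i; zero-probability microstates contribute nothing.
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -k_B * np.sum(p * np.log(p))

omega = 4
print(gibbs_entropy(np.full(omega, 1.0 / omega)))  # equals k_B * ln(4)
print(k_B * np.log(omega))
print(gibbs_entropy([0.7, 0.1, 0.1, 0.1]))         # smaller: less uncertainty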
This definition assumes that the basis set of states has been picked so that there is no information on their relative phases. In a different basis set, the more general expression is
$$ S = -k_{\mathrm{B}} \operatorname{Tr}(\widehat{\rho} \log \widehat{\rho}), $$
where \widehat{\rho} is the density matrix, \operatorname{Tr} is the trace and \log is the matrix logarithm. This density matrix formulation is not needed in cases of thermal equilibrium so long as the basis states are chosen to be energy eigenstates. For most practical purposes, this can be taken as the fundamental definition of entropy since all other formulas for S can be mathematically derived from it, but not vice versa.

In what has been called the fundamental assumption of statistical thermodynamics or the fundamental postulate in statistical mechanics, the occupation of any microstate is assumed to be equally probable (i.e. p_i = 1/Ω, where Ω is the number of microstates); this assumption is usually justified for an isolated system in equilibrium. Then the previous equation reduces to
$$ S = k_{\mathrm{B}} \log \Omega. $$
In thermodynamics, such a system is one in which the volume, number of molecules, and internal energy are fixed (the microcanonical ensemble).

The most general interpretation of entropy is as a measure of our uncertainty about a system. The equilibrium state of a system maximizes the entropy because we have lost all information about the initial conditions except for the conserved variables; maximizing the entropy maximizes our ignorance about the details of the system. This uncertainty is not of the everyday subjective kind, but rather the uncertainty inherent to the experimental method and interpretative model. The interpretative model has a central role in determining entropy. The qualifier "for a given set of macroscopic variables" above has deep implications: if two observers use different sets of macroscopic variables, they see different entropies. For example, if observer A uses the variables U, V and W, and observer B uses U, V, W, X, then, by changing X, observer B can cause an effect that looks like a violation of the second law of thermodynamics to observer A. In other words: the set of macroscopic variables one chooses must include everything that may change in the experiment; otherwise one might see decreasing entropy. Entropy can be defined for any Markov process with reversible dynamics and the detailed balance property.

In Boltzmann's 1896 Lectures on Gas Theory, he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics.

Entropy arises directly from the Carnot cycle. It can also be described as the reversible heat divided by temperature. Entropy is a fundamental function of state. In a thermodynamic system, pressure, density, and temperature tend to become uniform over time because the equilibrium state has a higher probability (more possible microstates) than any other state. As an example, for a glass of ice water in air at room temperature, the difference in temperature between the warm room (the surroundings) and the cold glass of ice and water (the system, not part of the room) begins to equalize as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water.
Over time the temperature of the glass and its contents and the temperature of the room become equal. In other words, the entropy of the room has decreased as some of its energy has been dispersed to the ice and water. However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the "universe" of the room and ice water system has reached temperature equilibrium, the entropy change from the initial state is at a maximum. The entropy of the thermodynamic system is a measure of how far the equalization has progressed.

Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry. Historically, the concept of entropy evolved to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy. For isolated systems, entropy never decreases. This fact has several important consequences in science: first, it prohibits "perpetual motion" machines; and second, it implies that the arrow of entropy has the same direction as the arrow of time. Increases in entropy correspond to irreversible changes in a system, because some energy is expended as waste heat, limiting the amount of work a system can do.

Unlike many other functions of state, entropy cannot be directly observed but must be calculated. Entropy can be calculated for a substance as the standard molar entropy from absolute zero (also known as absolute entropy) or as a difference in entropy from some other reference state defined as zero entropy. Entropy has the dimension of energy divided by temperature, with the unit joule per kelvin (J/K) in the International System of Units. While these are the same units as heat capacity, the two concepts are distinct. Entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform, such that entropy increases. The second law of thermodynamics states that the entropy of a closed system may increase or otherwise remain constant. Chemical reactions cause changes in entropy, and entropy plays an important role in determining in which direction a chemical reaction spontaneously proceeds.

One dictionary definition of entropy is that it is "a measure of thermal energy per unit temperature that is not available for useful work". For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine. A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed.
If the substances are at the same temperature and pressure, there is no net exchange of heat or work – the entropy change is entirely due to the mixing of the different substances. At a statistical mechanical level, this results from the change in available volume per particle with mixing.

The second law of thermodynamics requires that, in general, the total entropy of any system cannot decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system tends not to decrease. It follows that heat cannot flow from a colder body to a hotter body without the application of work (the imposition of order) to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion machine. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient.

It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, always makes a bigger contribution to the entropy of the environment than the decrease of the entropy of the air of that system. Thus, the total of the entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics.

In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system's ability to do useful work. The entropy change of a system at temperature T absorbing an infinitesimal amount of heat δq in a reversible way is given by δq/T. More explicitly, an amount of energy T_R ΔS is not available to do useful work, where T_R is the temperature of the coldest accessible reservoir or heat sink external to the system and ΔS is the entropy transferred. For further discussion, see exergy. Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely.

The applicability of the second law of thermodynamics is limited to systems which are near or in an equilibrium state. At the same time, the laws governing systems far from equilibrium are still debatable. One of the guiding principles for such systems is the maximum entropy production principle. It claims that non-equilibrium systems evolve so as to maximize their entropy production.

The fundamental thermodynamic relation

The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy U to changes in the entropy and the external parameters. This relation is known as the fundamental thermodynamic relation.
If external pressure p bears on the volume V as the only external parameter, this relation is:
$$ dU = T\, dS - p\, dV. $$
Since both internal energy and entropy are monotonic functions of temperature T, implying that the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium, and then the whole-system entropy, pressure and temperature may not exist). The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities.

Entropy in chemical thermodynamics

Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system – the combination of a subsystem under study and its surroundings – increases during all spontaneous chemical and physical processes. The Clausius equation δq_rev/T = ΔS introduces the measurement of entropy change, ΔS. Entropy change describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems – always from hotter to cooler spontaneously. The thermodynamic entropy therefore has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI).

Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J⋅kg−1⋅K−1). Alternatively, in chemistry, it may be referred to one mole of substance, in which case it is called the molar entropy, with a unit of J⋅mol−1⋅K−1. Thus, when one mole of substance at about 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of q_rev/T constitutes each element's or compound's standard molar entropy, an indicator of the amount of energy stored by a substance at 298 K. Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture.

Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, ΔS must be incorporated in an expression that includes both the system and its surroundings, ΔS_universe = ΔS_surroundings + ΔS_system. This expression becomes, via some steps, the Gibbs free energy equation for reactants and products in the system: ΔG (the Gibbs free energy change of the system) = ΔH (the enthalpy change) − T ΔS (the entropy change).

Entropy balance equation for open systems

In chemical engineering, the principles of thermodynamics are commonly applied to "open systems", i.e. those in which heat, work, and mass flow across the system boundary. Flows of both heat (\dot{Q}) and work, i.e. \dot{W}_\text{S} (shaft work) and P(dV/dt) (pressure–volume work), across the system boundaries in general cause changes in the entropy of the system.
Transfer as heat entails entropy transfer \dot{Q}/T, where T is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they also influence the total entropy of the system. This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system.

To derive a generalized entropy balance equation, we start with the general balance equation for the change in any extensive quantity Θ in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that dΘ/dt, i.e. the rate of change of Θ in the system, equals the rate at which Θ enters the system at the boundaries, minus the rate at which Θ leaves the system across the system boundaries, plus the rate at which Θ is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation with respect to the rate of change with time t of the extensive quantity entropy S, the entropy balance equation is:
$$ \frac{dS}{dt} = \sum_{k=1}^K \dot{M}_k \hat{S}_k + \frac{\dot{Q}}{T} + \dot{S}_\text{gen}, $$
where the overdots represent derivatives of the quantities with respect to time, \sum_{k=1}^K \dot{M}_k \hat{S}_k is the net rate of entropy flow due to the flows of mass into and out of the system (with \hat{S} the entropy per unit mass), \frac{\dot{Q}}{T} is the rate of entropy flow due to the flow of heat across the system boundary, and \dot{S}_\text{gen} is the rate of entropy production within the system. This entropy production arises from processes within the system, including chemical reactions, internal matter diffusion, internal heat transfer, and frictional effects such as viscosity occurring within the system from mechanical work transfer to or from the system. Note also that if there are multiple heat flows, the term \dot{Q}/T is replaced by \sum \dot{Q}_j/T_j, where \dot{Q}_j is the heat flow and T_j is the temperature at the jth heat flow port into the system.

Entropy change formulas for simple processes

For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas.

Isothermal expansion or compression of an ideal gas

For the expansion (or compression) of an ideal gas from an initial volume V_0 and pressure P_0 to a final volume V and pressure P at any constant temperature, the change in entropy is given by:
$$ \Delta S = n R \ln \frac{V}{V_0} = - n R \ln \frac{P}{P_0}. $$
Here n is the number of moles of gas and R is the ideal gas constant. These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy of an ideal gas remain constant. For heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature T_0 to a final temperature T, the entropy change is
$$ \Delta S = n C_P \ln \frac{T}{T_0}, $$
provided that the constant-pressure molar heat capacity (or specific heat) C_P is constant and that no phase transition occurs in this temperature interval. Similarly at constant volume, the entropy change is \Delta S = n C_v \ln \frac{T}{T_0}, where the constant-volume heat capacity C_v is constant and there is no phase change. At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply. Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps – heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is \Delta S = nC_v \ln \frac{T}{T_0} + nR \ln \frac{V}{V_0}. Similarly if the temperature and pressure of an ideal gas both vary, \Delta S = nC_P \ln \frac{T}{T_0} - nR \ln \frac{P}{P_0}. Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (melting) of a solid to a liquid at the melting point Tm, the entropy of fusion is \Delta S_\text{fus} = \frac{\Delta H_\text{fus}}{T_\text{m}}. Similarly, for vaporization of a liquid to a gas at the boiling point Tb, the entropy of vaporization is \Delta S_\text{vap} = \frac{\Delta H_\text{vap}}{T_\text{b}}.

Approaches to understanding entropy
As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid.

Standard textbook definitions
The following is a list of additional definitions of entropy from a collection of textbooks:
a measure of energy dispersal at a specific temperature.
a measure of disorder in the universe or of the availability of the energy in a system to do work (Free Press, ISBN 9780684855783).
a measure of a system's thermal energy per unit temperature that is unavailable for doing useful work.
In Boltzmann's definition, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium. Consistent with the Boltzmann definition, the second law of thermodynamics needs to be re-worded so that entropy increases over time, though the underlying principle remains the same. Entropy has often been loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the status quo of the system and is a measure of "molecular disorder" and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies (University of Chicago Press, ISBN 9780226075747). One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments.
He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measure of the total amount of "disorder" in the system is given by: \text{Disorder}={C_\text{D}\over C_\text{I}}.\, Similarly, the total amount of "order" in the system is given by: \text{Order}=1-{C_\text{O}\over C_\text{I}}.\, In which CD is the "disorder" capacity of the system, which is the entropy of the parts contained in the permitted ensemble, CI is the "information" capacity of the system, an expression similar to Shannon's channel capacity, and CO is the "order" capacity of the system. The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature. Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or "spreading" of the total energy of each constituent of a system over its particular quantized energy levels. Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students. As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics (compare discussion in next section). Physical chemist Peter Atkins, for example, who previously wrote of dispersal leading to a disordered state, now writes that "spontaneous changes are always accompanied by a dispersal of energy". Peter Atkins (1984). 9780716750048, Scientific American Library. ISBN 9780716750048 Relating entropy to energy usefulness Following on from the above, it is possible (in a thermal context) to regard lower entropy as an indicator or measure of the effectiveness or usefulness of a particular quantity of energy. This is because energy supplied at a higher temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at a lower temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a "loss" which can never be replaced. Thus, the fact that the entropy of the universe is steadily increasing, means that its total energy is becoming less useful: eventually, this will lead to the "heat death of the Universe". Entropy and adiabatic accessibility A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E.H.Lieb and Jakob Yngvason in 1999. This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909 and the monograph by R. Giles. R. Giles (2019). 9781483184913, Elsevier Science. . ISBN 9781483184913 In the setting of Lieb and Yngvason one starts by picking, for a unit amount of the substance under consideration, two reference states X_0 and X_1 such that the latter is adiabatically accessible from the former but not vice versa. 
Defining the entropies of the reference states to be 0 and 1 respectively the entropy of a state X is defined as the largest number \lambda such that X is adiabatically accessible from a composite state consisting of an amount \lambda in the state X_1 and a complementary amount, (1-\lambda), in the state X_0. A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: It is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling. Entropy in quantum mechanics In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as "von Neumann entropy", S = - k_\mathrm{B}\operatorname{Tr} ( \rho \log \rho ) \! where ρ is the density matrix and Tr is the trace operator. This upholds the correspondence principle, because in the classical limit, when the phases between the basis states used for the classical probabilities are purely random, this expression is equivalent to the familiar classical definition of entropy, S = - k_\mathrm{B}\sum_i p_i \, \log \, p_i, i.e. in such a basis the density matrix is diagonal. Von Neumann established a rigorous mathematical framework for quantum mechanics with his work Mathematische Grundlagen der Quantenmechanik. He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix he extended the classical concept of entropy into the quantum domain. When viewed in terms of information theory, the entropy state function is simply the amount of information (in the Shannon sense) that would be needed to specify the full microstate of the system. This is left unspecified by the macroscopic description. In information theory, entropy is the measure of the amount of information that is missing before reception and is sometimes referred to as Shannon entropy. Roger Balian (2019). 9783764371166, Birkhäuser. ISBN 9783764371166 Shannon entropy is a broad and general concept which finds applications in information theory as well as thermodynamics. It was originally devised by Claude Shannon in 1948 to study the amount of information in a transmitted message. The definition of the information entropy is, however, quite general, and is expressed in terms of a discrete set of probabilities ''pi so that H(X) = -\sum_{i=1}^n p(x_i) \log p(x_i). In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average amount of information in a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of yes/no questions needed to determine the content of the message. The question of the link between information entropy and thermodynamic entropy is a debated topic. While most authors argue that there is a link between the two, Leon Brillouin (2019). 9780486439181 ISBN 9780486439181 Nicholas Georgescu-Roegen (1971). 9780674257818, Harvard University Press. ISBN 9780674257818 Jing Chen (2019). 9789812563231, World Scientific. ISBN 9789812563231 (2019). 9789812832269, World Scientific. 
a few argue that they have nothing to do with each other. The expressions for the two entropies are similar. If W is the number of microstates that can yield a given macrostate, and each microstate has the same a priori probability, then that probability is p = 1/W. The Shannon entropy (in nats) is: H = -\sum_{i=1}^W p \log (p) = \log (W) and if entropy is measured in units of k per nat, then the entropy is given by: H = k \log (W) which is the famous Boltzmann entropy formula when k is Boltzmann's constant, which may be interpreted as the thermodynamic entropy per nat. There are many ways of demonstrating the equivalence of "information entropy" and "physics entropy", that is, the equivalence of "Shannon entropy" and "Boltzmann entropy". Nevertheless, some authors argue for dropping the word entropy for the H function of information theory and using Shannon's other term "uncertainty" instead (Schneider, Tom, DELILA system (Deoxyribonucleic acid Library Language), Information Theory Analysis of binding sites, Laboratory of Mathematical Biology, National Cancer Institute, Frederick, MD).

Experimental measurement of entropy
Entropy of a substance can be measured, although in an indirect way. The measurement uses the definition of temperature in terms of entropy, while limiting energy exchange to heat (dU \rightarrow dQ): T := \left(\frac{\partial U}{\partial S}\right)_{V,N} \Rightarrow \cdots \Rightarrow \; dS = dQ/T The resulting relation describes how entropy changes dS when a small amount of energy dQ is introduced into the system at a certain temperature T. The process of measurement goes as follows. First, a sample of the substance is cooled as close to absolute zero as possible. At such temperatures, the entropy approaches zero, due to the definition of temperature. Then, small amounts of heat are introduced into the sample and the change in temperature is recorded, until the temperature reaches a desired value (usually 25 °C). The obtained data allow the user to integrate the equation above, yielding the absolute value of entropy of the substance at the final temperature. This value of entropy is called calorimetric entropy.

Interdisciplinary applications of entropy
Although the concept of entropy was originally a thermodynamic construct, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution (Daniel R. Brooks, University of Chicago Press, ISBN 9780226075747; John Avery, World Scientific, ISBN 9789812383990; Hubert P. Yockey, Cambridge University Press, ISBN 9780521802932). For instance, an entropic argument has recently been proposed for explaining the preference of cave spiders in choosing a suitable area for laying their eggs.

Thermodynamic and statistical mechanics concepts
Entropy unit – a non-S.I. unit of thermodynamic entropy, usually denoted "e.u." and equal to one calorie per kelvin per mole, or 4.184 joules per kelvin per mole.
Gibbs entropy – the usual statistical mechanical entropy of a thermodynamic system.
Boltzmann entropy – a type of Gibbs entropy, which neglects internal statistical correlations in the overall particle distribution.
Tsallis entropy – a generalization of the standard Boltzmann–Gibbs entropy.
Standard molar entropy – the entropy content of one mole of substance, under conditions of standard temperature and pressure.
Residual entropy – the entropy present after a substance is cooled arbitrarily close to absolute zero.
Entropy of mixing – the change in the entropy when two different chemical substances or components are mixed.
Loop entropy – the entropy lost upon bringing together two residues of a polymer within a prescribed distance.
Conformational entropy – the entropy associated with the physical arrangement of a polymer chain that assumes a compact or globular protein state in solution.
Entropic force – a microscopic force or reaction tendency related to system organization changes, molecular frictional considerations, and statistical variations.
Free entropy – an entropic thermodynamic potential analogous to the free energy.
Entropic explosion – an explosion in which the reactants undergo a large change in volume without releasing a large amount of heat.
Entropy change – a change in entropy dS between two equilibrium states is given by the heat transferred dQ_rev divided by the absolute temperature T of the system in this interval.
Sackur–Tetrode entropy – the entropy of a monatomic classical ideal gas determined via quantum considerations.
Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. As time progresses, the second law of thermodynamics states that the entropy of an isolated system never decreases in large systems over significant periods of time. Hence, from this perspective, entropy measurement is thought of as a clock in these conditions. Entropy has been proven useful in the analysis of DNA sequences. Many entropy-based measures have been shown to distinguish between different structural regions of the genome, differentiate between coding and non-coding regions of DNA and can also be applied for the recreation of evolutionary trees by determining the evolutionary distance between different species. Since a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is continually increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy so that no more work can be extracted from any source. If the universe can be considered to have generally increasing entropy, then – as Roger Penrose has pointed out – gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole's event horizon (Christian H. von Baeyer, Harvard University Press, ISBN 9780674013872). Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity (see Hawking radiation). The role of entropy in cosmology remains a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general.
Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer. Eric J. Chaisson (2019). 9780674003422, Harvard University Press. ISBN 9780674003422 (2019). 9781107027251, Cambridge University Press. ISBN 9781107027251 This results in an "entropy gap" pushing the system further away from the posited heat death equilibrium. Victor J. Stenger (2019). 9781591024811, Prometheus Books. ISBN 9781591024811 Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult. Benjamin Gal-Or (1987). 9780387965260, Springer Verlag. ISBN 9780387965260 Current theories suggest the entropy gap to have been originally opened up by the early rapid exponential expansion of the universe. (in honor of John Wheeler's 90th birthday) Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus on The Entropy Law and the Economic Process. Nicholas Georgescu-Roegen (1971). 9780674257801, Harvard University Press. . ISBN 9780674257801 Due to Georgescu-Roegen's work, the laws of thermodynamics now form an integral part of the ecological economics school. (2019). 9781597266819, Island Press. . ISBN 9781597266819 Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics. John E.J. Schmitz (2019). 9780815515371, William Andrew Publishing. . ISBN 9780815515371 In economics, Georgescu-Roegen's work has generated the term 'entropy pessimism'. Since the 1990s, leading ecological economist and steady-state theorist Herman Daly – a student of Georgescu-Roegen – has been the economics profession's most influential proponent of the entropy pessimism position. In Hermeneutics, Arianna Béatrice Fabbricatore has used the term entropy relying on the works of Umberto Eco,Umberto Eco, Opera aperta. Forma e indeterminazione nelle poetiche contemporanee, Bompiani 2013 to identify and assess the loss of meaning between the verbal description of dance and the choreotext (the moving silk engaged by the dancer when he puts into action the choreographic writing)Arianna Beatrice Fabbricatore. (2017). La Querelle des Pantomimes. Danse, culture et société dans l'Europe des Lumières. Rennes: Presses universitaires de Rennes. generated by inter-semiotic translation operationsArianna Beatrice Fabbricatore. (2018). L'action dans le texte. Pour une approche herméneutique du Trattato teorico-prattico di Ballo (1779) de G. Magri. Ressource, Pantin, CN D... This use is linked to the notions of logotext and choreotext. In the transition from logotext to choreotext it is possible to identify two typologies of entropy: the first, called "natural", is related to the uniqueness of the performative act and its ephemeral character. The second is caused by "voids" more or less important in the logotext ( i.e. the verbal text that reflects the action danced ). 
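Referring back to the section "Entropy change formulas for simple processes" above, the following is a minimal sketch, in plain Python with made-up numbers that are not taken from any source cited in this article, checking numerically that the two ideal-gas formulas given there (the temperature–volume form and the temperature–pressure form) agree for the same change of state, as they must since entropy is a state function.

```python
import math

R = 8.314  # ideal gas constant, J/(mol*K)

def dS_from_T_and_V(n, Cv, T0, V0, T, V):
    # Delta S = n*Cv*ln(T/T0) + n*R*ln(V/V0)
    return n * Cv * math.log(T / T0) + n * R * math.log(V / V0)

def dS_from_T_and_P(n, Cp, T0, P0, T, P):
    # Delta S = n*Cp*ln(T/T0) - n*R*ln(P/P0)
    return n * Cp * math.log(T / T0) - n * R * math.log(P / P0)

# Made-up state change for 1 mol of a monatomic ideal gas (Cv = 3R/2, Cp = 5R/2).
n, Cv, Cp = 1.0, 1.5 * R, 2.5 * R
T0, V0 = 300.0, 0.010          # K, m^3
T,  V  = 450.0, 0.025
P0, P = n * R * T0 / V0, n * R * T / V   # ideal-gas pressures of the two states

print(f"from (T, V): {dS_from_T_and_V(n, Cv, T0, V0, T, V):.3f} J/K")
print(f"from (T, P): {dS_from_T_and_P(n, Cp, T0, P0, T, P):.3f} J/K")
# The two values agree because entropy is a state function.
```

The agreement follows from Cp = Cv + R and the ideal gas law, which is exactly the path-independence argument made in that section.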
Autocatalytic reactions and order creation Brownian ratchet Clausius–Duhem inequality Configuration entropy Departure function Entropic force Entropic value at risk Entropy (information theory) Entropy (computing) Entropy and life Entropy (order and disorder) Entropy rate Entropy production Extropy Geometrical frustration Harmonic entropy Heat death of the universe Info-metrics Laws of thermodynamics Multiplicity function Negentropy (negative entropy) Orders of magnitude (entropy) Phase space Principle of maximum entropy Stirling's formula Thermodynamic databases for pure substances Thermodynamic potential Thermodynamic equilibrium Wavelet entropy Gerhard Adam (1992). 9783528333119, Vieweg, Braunschweig. ISBN 9783528333119 Baierlein, Ralph (2019). 9780521658386, Cambridge University Press. ISBN 9780521658386 Arieh Arieh Ben-Naim (2019). 9789812700551, World Scientific. ISBN 9789812700551 Herbert, B Callen (2019). 9780471862567, John Wiley and Sons. ISBN 9780471862567 Chang, Raymond (1998). 9780071152211, McGraw Hill. ISBN 9780071152211 John, D. Cutnell (1998). 9780471191131, John Wiley and Sons, Inc.. ISBN 9780471191131 J. S. Dugdale (1996). 9780748405695, Taylor and Francis (UK); CRC (US). ISBN 9780748405695 Enrico Fermi (2019). 9780486603612, Prentice Hall. ISBN 9780486603612 (1993). 9780674753259, Harvard University Press. ISBN 9780674753259 E.P. Gyftopoulos (2019). 9780486439327, Dover. ISBN 9780486439327 Wassim M. Haddad (2019). 9780691123271, Princeton University Press. ISBN 9780691123271 Herbert Kroemer (1980). 9780716710882, W. H. Freeman Company. ISBN 9780716710882 Lambert, Frank L.; entropysite.oxy.edu Harald J. W. Müller-Kirsten (2019). 9789814449533, World Scientific. ISBN 9789814449533 Roger Penrose (2019). 9780679454434, A. A. Knopf. ISBN 9780679454434 F. Reif (1965). 9780070518001, McGraw-Hill. ISBN 9780070518001 Schroeder, Daniel V. (2019). 9780201380279, New York: Addison Wesley Longman. ISBN 9780201380279 Raymond, A. Serway (1992). 9780030960260, Saunders Golden Subburst Series. ISBN 9780030960260 Spirax-Sarco Limited, Entropy – A Basic Understanding A primer on entropy tables for steam engineering (1998). 9780679433422, Random House. ISBN 9780679433422 Entropy and the Second Law of Thermodynamics – an A-level physics lecture with detailed derivation of entropy based on Carnot cycle Khan Academy: entropy lectures, part of Chemistry playlist Proof: S (or Entropy) is a valid state variable Thermodynamic Entropy Definition Clarification Reconciling Thermodynamic and State Definitions of Entropy Entropy Intuition More on Entropy The Second Law of Thermodynamics and Entropy – Yale OYC lecture, part of Fundamentals of Physics I (PHYS 200) Entropy and the Clausius inequality MIT OCW lecture, part of 5.60 Thermodynamics & Kinetics, Spring 2008 The Discovery of Entropy by Adam Shulman. Hour-long video, January 2013. "Entropy" at Scholarpedia Categories: Concepts In Physics, Philosophy Of Thermal And Statistical Physics, State Functions, Asymmetry 10/10 Page Rank 5 Page Refs
Trying to understand this Quicksort Correctness proof

This proof is a proof by induction, and goes as follows:

P(n) is the assertion that "Quicksort correctly sorts every input array of length n."

Base case: every input array of length 1 is already sorted (P(1) holds).
Inductive step: fix n ≥ 2. Fix some input array of length n. Need to show: if P(k) holds for all k < n, then P(n) holds as well.

He then draws an array A partitioned around some pivot p. So he draws p, and calls the part of the array that is < p the 1st part, and the part that is > p the second part. The length of part 1 is k1, and the length of part 2 is k2. By the correctness proof of the Partition subroutine (proved earlier), the pivot p winds up in the correct position. By inductive hypothesis: the 1st and 2nd parts get sorted correctly by the recursive calls (using P(k1), P(k2)). So: after the recursive calls, the entire array is correctly sorted.

My confusion: I have a lot of trouble seeing exactly how this proves the correctness of it. So we assume that P(k) does indeed hold for all natural numbers k < n. Most of the induction proofs I had seen so far go something like: prove the base case, and show that P(n) => P(n+1). They usually also involved some sort of algebraic manipulation. This proof seems very different, and I don't understand how to apply the concept of induction to it.

I can somewhat reason that the correctness of the Partition subroutine is the key. So is the reasoning for its correctness as follows: we know that on each recursive call, it will partition the array around a pivot. This pivot will then be in its rightful position. Then each subarray will be further partitioned around a pivot, and that pivot will then be in its rightful position. This goes on and on until you get a subarray of length 1, which is trivially sorted. But then we're not assuming that P(k) holds for all k < n... we are actually SHOWING it does (since the Partition subroutine will always place one element in its rightful position). Are we not assuming that P(k) holds for all k < n, rather than showing it?

correctness-proof induction quicksort
FrostyStraw

What is "QUE"? Did you mean "QED"? (the Latin Quod Erat Demonstrandum, which does not contain any word starting with U) – Bakuriu
I did indeed mean QED. I guess my confusion led to me writing "WHAT?" in Spanish. – FrostyStraw

We are indeed assuming $P(k)$ holds for all $k < n$. This is a generalization of the "From $P(n-1)$, we prove $P(n)$" style of proof you're familiar with. The proof you describe is known as the principle of strong mathematical induction and has the form:

Suppose that $P(n)$ is a predicate defined on $n\in \{1, 2, \dotsc\}$. If we can show that $P(1)$ is true, and that $(\forall k < n \;[P(k)])\Longrightarrow P(n)$, then $P(n)$ is true for all integers $n\ge 1$.

In the proof to which you refer, that's exactly what's going on. To use quicksort to sort an array of size $n$, we partition it into three pieces: the first $k$ subarray, the pivot (which will be in its correct place), and the remaining subarray of size $n-k-1$. By the way partition works, every element in the first subarray will be less than or equal to the pivot and every element in the other subarray will be greater than or equal to the pivot, so when we recursively sort the first and last subarrays, we will wind up having sorted the entire array.
We show this is correct by strong induction: since the first subarray has $k<n$ elements, we can assume by induction that it will be correctly sorted. Since the second subarray has $n-k-1<n$ elements, we can assume that it will be correctly sorted. Thus, putting all the pieces together, we will wind up having sorted the array.

Rick Decker

The cool part about the principle of strong induction is that the base case $P(1)$ is not necessary! If we take $n=1$ in the induction step, then the antecedent $\forall k<1,P(k)$ is vacuous, so we have $P(1)$ unconditionally. – Mario Carneiro
Okay so... to be clear... We ASSUME P(k) is true for all k < n. And the way we SHOW that P(k) ==> P(n) (for all k < n) is through the combination of knowing that the pivot will for sure be in its correct position, and through the assumption (the inductive hypothesis) that the left and right subarrays are also sorted. Combine that with the base case (in which k = 1 < n), and the proof is complete?
well I guess it wouldn't be enough to know that the pivot is in its correct position, but also that the right subarray is all greater than the pivot and the left one is all less than
@FrostyStraw It's Chinese whispers. – Raphael ♦
@FrostyStraw Hehe, I meant the proof strategy. :)

This proof uses the principle of complete induction:

Suppose that:
Base case: $P(1)$ holds.
Step: For every $n > 1$, if $P(1),\ldots,P(n-1)$ hold (induction hypothesis) then $P(n)$ also holds.
Then $P(n)$ holds for all $n \geq 1$.

You can prove this principle using the usual induction principle by considering the property
$$ Q(m) \Leftrightarrow P(1) \text{ and } P(2) \text{ and } \cdots \text{ and } P(m) $$
I leave you the details.

Now, let's use complete induction to prove that the following version of Quicksort sorts its input correctly:

Quicksort(A, n)
  if n = 1 then return
  let X[1...x] consist of all elements of A[2],...,A[n] which are at most A[1]
  let Y[1...y] consist of all elements of A[2],...,A[n] which are larger than A[1]
  call Quicksort(X, x)
  call Quicksort(Y, y)
  set A to the concatenation of X, A[1], Y

Here A[1],...,A[n] is the input array, and n is its length. The statement that we want to prove is as follows:

Let $A$ be an array of length $n \geq 1$. Denote the contents of $A$ after calling Quicksort by $B$. Then:
1. Quicksort terminates on $A$.
2. There is a permutation $\pi_1,\ldots,\pi_n$ of $1,\ldots,n$ such that $B[i] = A[\pi_i]$.
3. $B[1] \leq B[2] \leq \cdots \leq B[n]$.

I will only prove the third property, leaving the rest to you. We thus let $P(n)$ be the following statement:

If $A$ is an array of length $n \geq 1$, and $B$ is its contents after running Quicksort(A, n), then $B[1] \leq B[2] \leq \cdots \leq B[n]$.

The proof is by complete induction on $n$. If $n = 1$ then there is nothing to prove, so suppose that $n > 1$. Let $X,x,Y,y$ be as in the procedure Quicksort. Since $x,y < n$, the induction hypothesis shows that $$ X[1] \leq X[2] \leq \cdots \leq X[x] \\ Y[1] \leq Y[2] \leq \cdots \leq Y[y] $$ Moreover, from the way we formed arrays $X$ and $Y$, it follows that $X[x] \leq A[1] < Y[1]$. Thus $$ X[1] \leq \cdots \leq X[x] \leq A[1] < Y[1] \leq \cdots \leq Y[y]. $$ It follows immediately that $B[1] \leq \cdots \leq B[n]$. Thus $P(n)$ holds.

The missing part of the argument is transitivity of '<', i.e. the property that if a < b and b < c, then a < c.
The proof that the final array is sorted goes something like this: Let A[i], A[j] be elements of the array post-sort, where i < j. Then A[i] < A[j] follows from one of the following placement cases (and there are no other cases):

(a) i and j are in the first partition - A[i] < A[j] follows by recursion/induction.
(b) i and j are in the second partition - A[i] < A[j] follows by recursion/induction.
(c) i is in the first partition and j is the index of the pivot - A[i] < A[j] follows by proof of partition procedure.
(d) i is the index of the pivot and j is in the second partition - A[i] < A[j] follows by proof of partition procedure.
(e) i is in the first partition and j is in the second partition - then by partition procedure, A[i] < pivot, and pivot < A[j]. So by transitivity, A[i] < A[j].

PMar
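For readers who want to run the partition-into-X-and-Y scheme used in the answers above, here is a minimal Python sketch (my own illustration, not the course's original code) that mirrors that pseudocode and checks the sortedness property P(n) on random inputs.

```python
import random

def quicksort(a):
    """Sort list a in place, mirroring the partition scheme from the answer."""
    if len(a) <= 1:
        return                          # base case: nothing to do (P(1))
    pivot = a[0]
    x = [v for v in a[1:] if v <= pivot]   # elements at most the pivot
    y = [v for v in a[1:] if v > pivot]    # elements larger than the pivot
    quicksort(x)
    quicksort(y)
    a[:] = x + [pivot] + y              # X, then the pivot, then Y

# Quick check of the claim P(n) on random arrays.
for _ in range(1000):
    arr = [random.randint(0, 50) for _ in range(random.randint(1, 30))]
    expected = sorted(arr)
    quicksort(arr)
    assert arr == expected
print("all test arrays sorted correctly")
```

The assertion only exercises the claim empirically; the inductive argument in the answers is what actually proves it for every input length.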
Density Measurement of Thin Sputtered Carbon Films G. L. Gorman, M.-M. Chen, G. Castillo, R. C. C. Perera Journal: Advances in X-ray Analysis / Volume 32 / 1988 The densities of sputtered thin carbon films have been determined using a novel X-ray technique. This nondestructive method involves the measurement of the transmittivity of a characteristic soft (low energy) X-ray line through the carbon film, and using the established equation I1 = I0 e^(−μρt), where I1/I0 is the transmittivity, μ the photo absorption cross section and t the independently measured thickness, the density ρ can be easily solved for. This paper demonstrates the feasibility of using this simple technique to measure densities of carbon films as thin as 300 Å, which is of tremendous practical interest as carbon films of this order of thickness are used extensively as abrasive and corrosive barriers (overcoats) for metallic recording media disks. The dependence of the density upon film thickness for a fixed processing condition is presented, as also its dependence (for a fixed thickness) upon different processing parameters (e.g., sputtering gas pressure and target power). The trends noted in this study indicate that the sputtering gas pressure plays the most important role, changing the film density from 2.4 g/cm3 at 1 mTorr to 1.5 g/cm3 at 30 mTorr for 1000 Å thick films.

Invasive pneumococcal disease in hospitalised children from Lima, Peru before and after introduction of the 7-valent conjugated vaccine A. Luna-Muschi, F. Castillo-Tokumori, M. P. Deza, E. H. Mercado, M. Egoavil, K. Sedano, M. E. Castillo, I. Reyes, E. Chaparro, R. Hernández, W. Silva, O. Del Aguila, F. Campos, A. Saenz, T. J. Ochoa Journal: Epidemiology & Infection / Volume 147 / 2019 Published online by Cambridge University Press: 22 February 2019, e91 The objective of this study was to determine the serotype distribution and antibiotic resistance of invasive pneumococcal disease (IPD) strains in children from Lima, Peru, before and after the introduction of the 7-valent pneumococcal conjugate vaccine (PCV7), which was introduced in the national immunisation program in 2009.
We conducted a prospective, multicentre, passive surveillance IPD study during 2006–2008 and 2009–2011, before and right after the introduction of PCV7 in Peru. The study was performed in 11 hospitals and five private laboratories in Lima, Peru, in patients <18 years old, with sterile site cultures yielding Streptococcus pneumoniae. In total 159 S. pneumoniae isolates were recovered. There was a decrease in the incidence of IPD in children <2 years old after the introduction of PCV7 (18.4/100 000 vs. 5.1/100 000, P = 0.004). Meningitis cases decreased significantly in the second period (P = 0.036) as well as the overall case fatality rate (P = 0.025), including a decreased case fatality rate of pneumonia (16.3% to 0%, P = 0.04). PCV7 serotypes showed a downward trend. Vaccine-preventable serotypes caused 78.9% of IPD cases, mainly 14, 6B, 5, 19F and 23F. A non-significant increase in erythromycin resistance was reported. Our findings suggest that the introduction of PCV7 led to a significant decrease of IPD in children under 2 years old and in the overall case fatality rate. Study of 1,15-pentadecanediol by Powder X-ray Diffraction and Polarized Light Microscopy G. Luis-Raya, M. Ramirez-Cardona, M.P. Falcon-Leon, A.I. Martinez-Perez, F. Gonzalez-Hernandez, E.G. Perez-Perez, A. Silva-Castillo, E. E. Vera-Cardenas Numerical Analysis Receiving/Transmitting Mechanisms of ZnO/Ag Nanoantennas A. Garcia-Barrientos, F. R. Castillo-Soria, M. A. Cardenas-Juarez, V. I. Rodriguez-Abdala, F. J. Gonzalez, J. E. Sanchez CdTe, ZnTe and Cd1-XZnXTe Nanolayers Grown by Atomic Layer Deposition on GaSb and GaAs (001) Oriented Substrates J. E. Flores-Mena, R. S. Castillo-Ojeda, J. Díaz-Reyes, M. Galván-Arellano, F. de Anda-Salazar Journal: MRS Advances / Volume 2 / Issue 50 / 2017 Does growth restriction increase the vulnerability to acute ventilation-induced brain injury in newborn lambs? Implications for future health and disease B. J. Allison, S. B. Hooper, E. Coia, G. Jenkin, A. Malhotra, V. Zahra, A. Sehgal, M. Kluckow, A. W. Gill, T. Yawno, G. R. Polglase, M. Castillo-Melendez, S. L. Miller Journal: Journal of Developmental Origins of Health and Disease / Volume 8 / Issue 5 / October 2017 Fetal growth restriction (FGR) and preterm birth are frequent co-morbidities, both are independent risks for brain injury. However, few studies have examined the mechanisms by which preterm FGR increases the risk of adverse neurological outcomes. We aimed to determine the effects of prematurity and mechanical ventilation (VENT) on the brain of FGR and appropriately grown (AG, control) lambs. We hypothesized that FGR preterm lambs are more vulnerable to ventilation-induced acute brain injury. FGR was surgically induced in fetal sheep (0.7 gestation) by ligation of a single umbilical artery. After 4 weeks, preterm lambs were euthanized at delivery or delivered and ventilated for 2 h before euthanasia. Brains and cerebrospinal fluid (CSF) were collected for analysis of molecular and structural indices of early brain injury. FGRVENT lambs had increased oxidative cell damage and brain injury marker S100B levels compared with all other groups. Mechanical ventilation increased inflammatory marker IL-8 within the brain of FGRVENT and AGVENT lambs. Abnormalities in the neurovascular unit and increased blood–brain barrier permeability were observed in FGRVENT lambs, as well as an altered density of vascular tight junctions markers. 
FGR and AG preterm lambs have different responses to acute injurious mechanical ventilation, changes which appear to have been developmentally programmed in utero. Carbon Nanostructures Synthesized by Chemical Reaction Using Rongalite and Polyethyleneimine as Complex Agents J.A. González, R. C. Carrillo-Torres, M. E. Alvarez-Ramos, S. J. Castillo Nonlinear mode interactions in a counter-rotating split-cylinder flow Paloma Gutierrez-Castillo, Juan M. Lopez Journal: Journal of Fluid Mechanics / Volume 816 / 10 April 2017 The flow in a split cylinder with each half in exact counter rotation is studied numerically. The exact counter rotation, quantified by a Reynolds number $\mathit{Re}$ based on the rotation rate and radius, imparts the system with an $O(2)$ symmetry (invariance to azimuthal rotations as well as to an involution consisting of a reflection about the mid-plane composed with a reflection about any meridional plane). The $O(2)$ symmetric basic state is dominated by a shear layer at the mid-plane separating the two counter-rotating bodies of fluid, created by the opposite-signed vortex lines emanating from the two endwalls being bent to meet at the split in the sidewall. With the exact counter rotation, the additional involution symmetry allows for steady non-axisymmetric states, that exist as a group orbit. Different members of the group simply correspond to different azimuthal orientations of the same flow structure. Steady states with azimuthal wavenumber $m$ (the value of $m$ depending on the cylinder aspect ratio $\unicode[STIX]{x1D6E4}$ ) are the primary modes of instability as $\mathit{Re}$ and $\unicode[STIX]{x1D6E4}$ are varied. Mode competition between different steady states ensues, and further bifurcations lead to a variety of different time-dependent states, including rotating waves, direction-reversing waves, as well as a number of slow–fast pulse waves with a variety of spatio-temporal symmetries. Further from the primary instabilities, the competition between the vortex lines from each half-cylinder settles on either a $m=2$ steady state or a limit cycle state with a half-period-flip spatio-temporal symmetry. By computing in symmetric subspaces as well as in the full space, we are able to unravel many details of the dynamics involved. Folate and vitamin B12 concentrations are associated with plasma DHA and EPA fatty acids in European adolescents: the Healthy Lifestyle in Europe by Nutrition in Adolescence (HELENA) study I. Iglesia, I. Huybrechts, M. González-Gross, T. Mouratidou, J. Santabárbara, V. Chajès, E. M. González-Gil, J. Y. Park, S. Bel-Serrat, M. Cuenca-García, M. Castillo, M. Kersting, K. Widhalm, S. De Henauw, M. Sjöström, F. Gottrand, D. Molnár, Y. Manios, A. Kafatos, M. Ferrari, P. Stehle, A. Marcos, F. J. Sánchez-Muniz, L. A. Moreno Journal: British Journal of Nutrition / Volume 117 / Issue 1 / 14 January 2017 This study aimed to examine the association between vitamin B6, folate and vitamin B12 biomarkers and plasma fatty acids in European adolescents. A subsample from the Healthy Lifestyle in Europe by Nutrition in Adolescence study with valid data on B-vitamins and fatty acid blood parameters, and all the other covariates used in the analyses such as BMI, Diet Quality Index, education of the mother and physical activity assessed by a questionnaire, was selected resulting in 674 cases (43 % males). B-vitamin biomarkers were measured by chromatography and immunoassay and fatty acids by enzymatic analyses. 
Linear mixed models elucidated the association between B-vitamins and fatty acid blood parameters (changes in fatty acid profiles according to change in 10 units of vitamin B biomarkers). DHA, EPA) and n-3 fatty acids showed positive associations with B-vitamin biomarkers, mainly with those corresponding to folate and vitamin B12. Contrarily, negative associations were found with n-6:n-3 ratio, trans-fatty acids and oleic:stearic ratio. With total homocysteine (tHcy), all the associations found with these parameters were opposite (for instance, an increase of 10 nmol/l in red blood cell folate or holotranscobalamin in females produces an increase of 15·85 µmol/l of EPA (P value <0·01), whereas an increase of 10 nmol/l of tHcy in males produces a decrease of 2·06 µmol/l of DHA (P value <0·05). Positive associations between B-vitamins and specific fatty acids might suggest underlying mechanisms between B-vitamins and CVD and it is worth the attention of public health policies. A Multifaceted Approach to Reduction of Catheter-Associated Urinary Tract Infections in the Intensive Care Unit With an Emphasis on "Stewardship of Culturing" Katherine M. Mullin, Christopher S. Kovacs, Cynthia Fatica, Colette Einloth, Elizabeth A. Neuner, Jorge A. Guzman, Eric Kaiser, Venu Menon, Leticia Castillo, Marc J. Popovich, Edward M. Manno, Steven M. Gordon, Thomas G. Fraser Journal: Infection Control & Hospital Epidemiology / Volume 38 / Issue 2 / February 2017 Print publication: February 2017 Catheter-associated urinary tract infections (CAUTIs) are among the most common hospital-acquired infections (HAIs). Reducing CAUTI rates has become a major focus of attention due to increasing public health concerns and reimbursement implications. To implement and describe a multifaceted intervention to decrease CAUTIs in our ICUs with an emphasis on indications for obtaining a urine culture. A project team composed of all critical care disciplines was assembled to address an institutional goal of decreasing CAUTIs. Interventions implemented between year 1 and year 2 included protocols recommended by the Centers for Disease Control and Prevention for placement, maintenance, and removal of catheters. Leaders from all critical care disciplines agreed to align routine culturing practice with American College of Critical Care Medicine (ACCCM) and Infectious Disease Society of America (IDSA) guidelines for evaluating a fever in a critically ill patient. Surveillance data for CAUTI and hospital-acquired bloodstream infection (HABSI) were recorded prospectively according to National Healthcare Safety Network (NHSN) protocols. Device utilization ratios (DURs), rates of CAUTI, HABSI, and urine cultures were calculated and compared. The CAUTI rate decreased from 3.0 per 1,000 catheter days in 2013 to 1.9 in 2014. The DUR was 0.7 in 2013 and 0.68 in 2014. The HABSI rates per 1,000 patient days decreased from 2.8 in 2013 to 2.4 in 2014. Effectively reducing ICU CAUTI rates requires a multifaceted and collaborative approach; stewardship of culturing was a key and safe component of our successful reduction efforts. Infect Control Hosp Epidemiol 2017;38:186–188 Murine models susceptibility to distinct Trypanosoma cruzi I genotypes infection CIELO M. LEÓN, MARLENY MONTILLA, RICARDO VANEGAS, MARIA CASTILLO, EDGAR PARRA, JUAN DAVID RAMÍREZ Journal: Parasitology / Volume 144 / Issue 4 / April 2017 Chagas disease is a complex zoonosis that affects around 8 million people worldwide. 
This pathology is caused by Trypanosoma cruzi, a kinetoplastid parasite that shows tremendous genetic diversity evinced in six distinct Discrete Typing Units (TcI-TcVI) including a recent genotype named as TcBat and associated with anthropogenic bats. TcI presents a broad geographical distribution and has been associated with chronic cardiomyopathy. Recent phylogenetic studies suggest the existence of two genotypes (Domestic (TcIDom) and sylvatic TcI) within TcI. The understanding of the course of the infection in different mouse models by these two genotypes is not yet known. Therefore, we infected 126 animals (ICR-CD1, National Institute of Health (NIH) and Balb/c) with two TcIDom strains and one sylvatic strain for a follow-up period of 60 days. We quantified the parasitaemia, immune response and histopathology observing that the maximum day of parasitaemia was achieved at day 21 post-infection. Domestic strains showed higher parasitaemia than the sylvatic strain in the three mouse models; however in the survival curves Balb/c mice were less susceptible to infection compared with NIH and ICR-CD1. Our results suggest that the genetic background plays a fundamental role in the natural history of the infection and the sympatric TcI genotypes have relevant implications in disease pathogenesis. Evidence of in-situ Type II radio bursts in interplanetary shocks S. M. Díaz-Castillo, J. C. Martínez Oliveros, B. Calvo-Mozo Journal: Proceedings of the International Astronomical Union / Volume 12 / Issue S327 / October 2016 We present a database of 11 interplanetary shocks associated to coronal mass ejections (CMEs) observed by STEREO and Wind missions between 2006 and 2011 that show evidence of Type II radio burst. For all events, we calculated the principal characteristics of the shock driver, the intensity and geometrical configuration of the in-situ shock and checked for the existence of in-situ type II radio burst. We made a comparative analysis of two CME events (on 18 August 2010 and 4 June 2011), which are apparently associated to two or more magnetic structures which interact in space (i.e. CMEs, SIRs, CIRs). These events show varied shock configurations and intensities. We found evidence of in-situ type II radio bursts in one of the events studied, suggesting that the geometry of the shock (quasi-perpendicularity) is also critical for the generation and/or detection of radio emission in-situ. Three-dimensional instabilities and inertial waves in a rapidly rotating split-cylinder flow Juan M. Lopez, Paloma Gutierrez-Castillo Journal: Journal of Fluid Mechanics / Volume 800 / 10 August 2016 Published online by Cambridge University Press: 13 July 2016, pp. 666-687 Print publication: 10 August 2016 The nonlinear dynamics of the flow in a differentially rotating split cylinder is investigated numerically. The differential rotation, with the top half of the cylinder rotating faster than the bottom half, establishes a basic state consisting of a bulk flow that is essentially in solid-body rotation at the mean rotation rate of the cylinder and boundary layers where the bulk flow adjusts to the differential rotation of the cylinder halves, which drives a strong meridional flow. There are Ekman-like layers on the top and bottom end walls, and a Stewartson-like side wall layer with a strong downward axial flow component. 
The complicated bottom corner region, where the downward flow in the side wall layer decelerates and negotiates the corner, is the epicentre of a variety of instabilities associated with the local shear and curvature of the flow, both of which are very non-uniform. Families of both high and low azimuthal wavenumber rotating waves bifurcate from the basic state in Eckhaus bands, but the most prominent states found near onset are quasiperiodic states corresponding to mixed modes of the high and low azimuthal wavenumber rotating waves. The frequencies associated with most of these unsteady three-dimensional states are such that spiral inertial wave beams are emitted from the bottom corner region into the bulk, along cones at angles that are well predicted by the inertial wave dispersion relation, driving the bulk flow away from solid-body rotation. Analysis of H 2 and SiH 4 in the Deposition of pm-Si:H Thin Films by PECVD Process for Solar Cell Applications J. Plaza-Castillo, A. García-Barrientos, M. Moreno-Moreno, M. J. Arellano-Jiménez, K. Y. Vizcaíno, J.L. Bernal Morphing structure for a rudder M. A. Castillo-Acero, C. Cuerno-Rejado, M. A. Gómez-Tierno Journal: The Aeronautical Journal / Volume 120 / Issue 1230 / August 2016 Published online by Cambridge University Press: 20 June 2016, pp. 1291-1314 The modification of aerofoils with structural morphing in order to enhance aerodynamic efficiency is an active field of research. The required forced and induced displacements are, usually, out the current developments on shape memory alloys, piezoelectric actuators or multi-stable structures for commercial transport aircraft applications. This work aims to present studies for obtaining an optimum rudder structure which morphs to a pre-defined curvature that can sustain aerodynamic and internal loads in a critical certification load case for a commercial transport aircraft. It also includes the feasibility of a morphing rudder based on a zero Poisson skeleton, or close to a zero Poisson ratio panel geometrical configuration that has no transverse deformation when perpendicularly loaded and which is produced with an additive layer manufacturing process. Pressure Effect on the Deposition in the a-Si:H Films by PECVD Process for Solar Cell Applications J. Plaza-Castillo, A. Garcia-Barrientos, M. Moreno-Moreno, K.Y. Vizcaino, J.A. Hoyo-Montano, G. Valencia-Palomo Chemical and structural characterization of PbS-SiO2 Core-shell structures synthesized by ultrasonic wave-assisted chemical bath deposition P. Moreno-Wong, J. Alvarado-Rivera, Gemma Moreno, M. C. Acosta-Enriquez, S. J. Castillo Study of Corrosion Behavior of Polyuretane/nanoHidroxiapatite Hybrid Coating in Hank Solution at 25 °C G. Carbajal-De La Torre, A.B. Martinez-Valencia, A. Sanchez-Castillo, M. Villagomez-Galindo, M.A. Espinosa-Medina The study of corrosion behavior of polyurethane/nanohydroxyapatite hybrid coating in aerated Hank solution at 25 °C by Potentiodinamic and Electrochemical Impedance techniques was realized. The nanohydroxyapatite (nHA) powders were synthesized by ultrasonic assisted co-precipitation wet chemical method and then mixed with pure polyurethane (PU) during the polymerization. Results were supported by SEM morphologic characterization. Results showed that good corrosion resistance of hybrid coating, showing small corrosion product layer formation. Corrosion mechanisms are affected by an increasing of polarization resistance, promoting decreasing in the corrosion rates. 
Diffusion of ionic species was the governing mechanism in the corrosion behavior of polyurethane/nanohydroxyapatite hybrid coating. Expression of the histone chaperone SET/TAF-Iβ during the strobilation process of Mesocestoides corti (Platyhelminthes, Cestoda) CAROLINE B. COSTA, KARINA M. MONTEIRO, ALINE TEICHMANN, EDILEUZA D. DA SILVA, KARINA R. LORENZATTO, MARTÍN CANCELA, JÉSSICA A. PAES, ANDRÉ DE N. D. BENITZ, ESTELA CASTILLO, ROGÉRIO MARGIS, ARNALDO ZAHA, HENRIQUE B. FERREIRA Journal: Parasitology / Volume 142 / Issue 9 / August 2015 Published online by Cambridge University Press: 31 March 2015, pp. 1171-1182 The histone chaperone SET/TAF-Iβ is implicated in processes of chromatin remodelling and gene expression regulation. It has been associated with the control of developmental processes, but little is known about its function in helminth parasites. In Mesocestoides corti, a partial cDNA sequence related to SET/TAF-Iβ was isolated in a screening for genes differentially expressed in larvae (tetrathyridia) and adult worms. Here, the full-length coding sequence of the M. corti SET/TAF-Iβ gene was analysed and the encoded protein (McSET/TAF) was compared with orthologous sequences, showing that McSET/TAF can be regarded as a SET/TAF-Iβ family member, with a typical nucleosome-assembly protein (NAP) domain and an acidic tail. The expression patterns of the McSET/TAF gene and protein were investigated during the strobilation process by RT-qPCR, using a set of five reference genes, and by immunoblot and immunofluorescence, using monospecific polyclonal antibodies. A gradual increase in McSET/TAF transcripts and McSET/TAF protein was observed upon development induction by trypsin, demonstrating McSET/TAF differential expression during strobilation. These results provided the first evidence for the involvement of a protein from the NAP family of epigenetic effectors in the regulation of cestode development.
Heterologous expression of genes for bioconversion of xylose to xylonic acid in Corynebacterium glutamicum and optimization of the bioprocess M. S. Lekshmi Sundar1,2, Aliyath Susmitha1,2, Devi Rajan1, Silvin Hannibal3, Keerthi Sasikumar1,2, Volker F. Wendisch3 & K. Madhavan Nampoothiri1,2 In bacterial system, direct conversion of xylose to xylonic acid is mediated through NAD-dependent xylose dehydrogenase (xylB) and xylonolactonase (xylC) genes. Heterologous expression of these genes from Caulobacter crescentus into recombinant Corynebacterium glutamicum ATCC 13032 and C. glutamicum ATCC 31831 (with an innate pentose transporter, araE) resulted in an efficient bioconversion process to produce xylonic acid from xylose. Process parameters including the design of production medium was optimized using a statistical tool, Response Surface Methodology (RSM). Maximum xylonic acid of 56.32 g/L from 60 g/L xylose, i.e. about 76.67% of the maximum theoretical yield was obtained after 120 h fermentation from pure xylose with recombinant C. glutamicum ATCC 31831 containing the plasmid pVWEx1 xylB. Under the same condition, the production with recombinant C. glutamicum ATCC 13032 (with pVWEx1 xylB) was 50.66 g/L, i.e. 69% of the theoretical yield. There was no significant improvement in production with the simultaneous expression of xylB and xylC genes together indicating xylose dehydrogenase activity as one of the rate limiting factor in the bioconversion. Finally, proof of concept experiment in utilizing biomass derived pentose sugar, xylose, for xylonic acid production was also carried out and obtained 42.94 g/L xylonic acid from 60 g/L xylose. These results promise a significant value addition for the future bio refinery programs. Made C. glutamicum recombinants with genes for xylose to xylonic acid conversion. Bioprocess development using C. glutamicum for xylonic acid. Conversion of biomass derived xylose to xylonic acid. D-xylonic acid, an oxidation product of xylose, is a versatile platform chemical with multifaceted applications in the fields of food, pharmaceuticals, and agriculture. It is considered by the U.S. Department of Energy to be one of the 30 chemicals of highest value because it can be used in a variety of applications, including as a dispersant, pH regulator, chelator, antibiotic clarifying agent and health enhancer (Byong-Wa et al. 2006; Toivari et al. 2012). Xylonic acid may also be used as a precursor for bio-plastic, polymer synthesis and other chemicals such as 1,2,4-butanetriol (Niu Wei et al. 2003). Although xylonic acid production is feasible via chemical oxidation using platinum or gold catalysts, selectivity is relatively poor (Yim et al. 2017). As the pentose sugar catabolism is restricted to the majority of the industrial microbes (Wisselink et al. 2009), microbial conversion of xylose to xylonic acid gained interest. As of now, biogenic production of xylonic acid has been accomplished in various microorganisms, including Escherichia coli, Saccharomyces cerevisiae and Kluyveromyces lactis by introducing xylB (encoding xylose dehydrogenase) and xylC (encoding xylonolactonase) genes from Caulobacter crescentus or Trichoderma reesei (Nygård et al. 2011; Toivari et al. 2012; Cao et al. 2013). As xylose is the monomeric sugar required for xylonic acid production, a lot of interest has been paid on utilizing xylose generated from lignocellulosic biomass (Lin et al. 2012). 
Bio-transformation of lignocellulosic biomass into platform chemicals is possible only through its conversion to monomeric sugars, mostly by pretreatment, i.e. pre-hydrolysis with alkali or acid at elevated temperature, or via enzymatic hydrolysis. Monomeric hexose and pentose sugars are generated from lignocellulosic biomass along with inhibitory by-products such as furfural, 5-hydroxymethylfurfural and 4-hydroxybenzaldehyde that affect the performance of microbial production hosts (Matano et al. 2014). The biomass refinery concept is attracting more and more attention because of its bearing on the cost effectiveness of the 2G ethanol program. Microbial production of value-added products such as biopolymers, bioethanol, butanol, organic acids and xylitol has been reported from the C5 stream generated by the pretreatment of biomass, using different microbes such as Pichia stipitis, Clostridium acetobutylicum, Candida guilliermondii and Bacillus coagulans (Mussatto and Teixeira 2010; Ou et al. 2011; de Arruda et al. 2011; Lin et al. 2012; Raganati et al. 2015). Although some industrial strains are capable of pentose fermentation, most of them are sensitive to the inhibitors generated during lignocellulosic biomass pretreatment. However, Corynebacterium glutamicum showed remarkable resistance towards these inhibitory by-products under growth-arrested conditions (Sakai et al. 2007). C. glutamicum is a Gram-positive, aerobic, rod-shaped, non-spore-forming soil actinomycete which exhibits numerous ideal intrinsic attributes as a microbial factory for producing amino acids and high-value chemicals (Heider and Wendisch 2015; Hirasawa and Shimizu 2016; Yim et al. 2017). This bacterium has been successfully engineered to produce a broad range of products, including diamines, amino-carboxylic acids, diacids, recombinant proteins and even industrial enzymes (Becker et al. 2018; Baritugo et al. 2018). Many metabolic engineering interventions have been reported in C. glutamicum for the production of chemicals such as amino acids, sugar acids, xylitol and biopolymers from hemicellulosic biomass such as wheat bran, rice straw and sorghum stover (Gopinath et al. 2011; Wendisch et al. 2016; Dhar et al. 2016). Since C. glutamicum lacks the genes for the metabolic conversion of xylose to xylonic acid, the heterologous expression of the xylose dehydrogenase (xylB) and xylonolactonase (xylC) genes from Caulobacter crescentus was attempted. In addition to the ATCC 13032 wild type, we also explored C. glutamicum ATCC 31831, which contains a pentose transporter gene (araE) that enables the uptake of pentose sugars (Kawaguchi et al. 2009; Choi et al. 2019). The xylB and xylC genes, individually as well as together as xylBC, were amplified from the xylose operon of C. crescentus, the resulting plasmids were transformed into both C. glutamicum strains, and xylonic acid production was checked. Microbial strains and culture conditions Microbial strains and plasmids used in this study are listed in Table 1. For genetic manipulations, E. coli strains were grown at 37 °C in Luria–Bertani (LB) medium. C. glutamicum strains were grown at 30 °C in Brain Heart Infusion (BHI) medium. Where appropriate, media were supplemented with antibiotics. The final kanamycin concentration for both E. coli and C. glutamicum was 25 μg/ml. Culture growth was measured spectrophotometrically at 600 nm using a UV–VIS spectrophotometer (UVA-6150, Shimadzu, Japan).
Table 1 Microbial strains, plasmids and primers used in the study Molecular techniques and strain construction Standard molecular techniques were performed according to the protocols described by Sambrook et al. (2006). Genomic DNA isolation was done with the GenElute genomic DNA isolation kit (Sigma, India). Plasmid isolation was done using the Qiagen plasmid midi kit (Qiagen, Germany). Polymerase chain reaction (PCR) was performed using an automated PCR system (My Cycler, Eppendorf, Germany) in a total volume of 50 μl with 50 ng of DNA and 0.2 mM dNTP in PrimeSTAR™ buffer (Takara), with 1.25 U of PrimeSTAR™ HS DNA polymerase (Takara), and the PCR product was purified with the QIAquick PCR purification kit (Qiagen, Germany) as per the manufacturers' instructions. Competent E. coli DH5α cells were prepared by the Transformation and Storage Solution (TSS) method and transformed by heat shock (Chung and Miller 1993). C. glutamicum competent cells were transformed by electroporation (van der Rest et al. 1999). The xylose dehydrogenase (xylB) and xylonolactonase (xylC) genes of Caulobacter crescentus, individually and together as xylBC, were amplified from the xylose-inducible xylXABCD operon (CC0823–CC0819) (Stephens et al. 2007) by polymerase chain reaction (PCR) with the appropriate primers shown in Table 1, and the purified PCR products (747 bp xylB, 870 bp xylC and 1811 bp xylBC) were verified by sequencing and cloned into the restriction site (BamHI/PstI) of the pVWEx1 shuttle vector. The engineered plasmids, designated pVWEx1-xylB, pVWEx1-xylC and pVWEx1-xylBC, were transformed into E. coli DH5α, and transformants bearing pVWEx1 derivatives were screened in LB medium supplemented with kanamycin (25 µg mL−1). Competent cells of C. glutamicum ATCC 13032 and ATCC 31831 were prepared and the plasmids were electroporated into both C. glutamicum strains with parameters set at 25 μF, 600 Ω and 2.5 kV, yielding a pulse duration of 10 ms, and positive clones were selected on LBHIS kanamycin (25 µg mL−1) plates (van der Rest et al. 1999). Fermentative production of xylonic acid by C. glutamicum transformants For xylonic acid production, C. glutamicum was inoculated into 10 ml of liquid medium (BHI broth) in a test tube and grown overnight at 30 °C under aerobic conditions with shaking at 200 rpm. An aliquot of the 10 ml culture was used to inoculate 100 ml of CGXII production medium (Keilhauer et al. 1993) containing 35 g/L xylose and 5 g/L glucose as carbon sources and kanamycin (25 µg mL−1). IPTG (1 mM) was added for induction at the time of inoculation. Fermentation was carried out in 250 mL Erlenmeyer flasks containing 100 mL production medium and incubated as described above. Samples were withdrawn at regular intervals to determine sugar consumption and xylonic acid production. Since the xylB transformant was found to be the best producer, it was also compared with C. glutamicum ATCC 13032 carrying the xylB gene to see whether the innate araE pentose transporter of ATCC 31831 confers any advantage over the wild-type ATCC 13032. Media engineering by response surface methodology (RSM) Response surface methodology was applied to identify the operating variables that have a significant effect on xylonic acid production.
A Box–Behnken experimental design (BBD) (Box and Behnken 1960) with four independent variables (selected based on single-parameter studies, data not shown) that may affect xylonic acid production, namely (NH4)2SO4 (2.5–12.5 g/L), urea (4.5–18.5 g/L), xylose (30–90 g/L) and inoculum (7.5–1.125%), was studied at three levels, − 1, 0 and + 1, corresponding to low, medium and high values respectively. Responses were measured as the titer (g/L) of xylonic acid. The statistical and numerical analysis of the model was evaluated by analysis of variance (ANOVA), which included the p-value, regression coefficients, effect values and F value, using Minitab 17 software. Studies were performed using C. glutamicum ATCC 31831 harboring pVWEx1-xylB. Dilute acid pretreatment of the biomass The rice straw was crushed into fine particles (about 10 mm) and pre-soaked in dilute acid (H2SO4) for 30 min, then pretreated at 15% (w/w) biomass loading and 1% (w/w) acid concentration at 121 °C for 1 h. After cooling, the mixture was neutralized to pH 6–7 using 10 N NaOH. The liquid portion, i.e. the acid-pretreated liquor (APL) rich in pentose sugar (xylose), was separated from the pretreated slurry and lyophilized to concentrate it to the desired xylose level, which was estimated prior to the shake-flask fermentation studies. Quantification of sugars and xylonic acid in fermentation broth The qualitative and quantitative analysis of sugars and sugar acid (xylonic acid) was performed using an automated high-performance liquid chromatography (HPLC) system (Prominence UFLC, Shimadzu, Japan) equipped with an auto-sampler, column oven and RI detector. The monomeric sugars (xylose and glucose) were resolved with a Phenomenex Rezex RPM Pb+ cation exchange monosaccharide column (300 × 7.5 mm) operated at 80 °C. MilliQ water (Millipore) at a flow rate of 0.6 mL/min was used as the mobile phase. For xylonic acid detection, a Phenomenex organic acid column (250 mm × 4.6 mm × 5 µm) operated at 55 °C was used with a mobile phase of 0.01 N H2SO4 at a flow rate of 0.6 mL/min. The samples were centrifuged (13,000 rpm for 10 min at 4 °C) and filtered through 0.2 µm filters (Pall Corporation, Port Washington, New York) before analysis. Xylose utilization and xylonic acid production by C. glutamicum transformants Corynebacterium glutamicum recombinants expressing xylB, xylC and xylBC were constructed. The xylose dehydrogenase and xylonolactonase genes were cloned into the IPTG-inducible expression vector pVWEx1 and transformed into C. glutamicum ATCC 31831. To check xylonic acid production from xylose, the C. glutamicum ATCC 31831 transformants harboring pVWEx1-xylB, pVWEx1-xylC and pVWEx1-xylBC were cultivated in CGXII medium containing 5 g/L of glucose as the carbon source for initial cell growth and 35 g/L of xylose as the substrate for xylonic acid production. Cell growth, xylose consumption and xylonic acid production were analyzed at regular intervals during the incubation. The analysis showed that, compared with the control strain carrying the empty vector (Fig. 1a), the transformant harboring pVWEx1-xylB grew considerably faster than the other transformants, utilized xylose effectively (77.2% utilization after 120 h) and gave a maximum production of 32.5 g/L xylonic acid (Fig. 1b). The pVWEx1-xylBC harboring strain produced 26 g/L xylonic acid (Fig. 1d), whereas pVWEx1-xylC showed neither significant xylose uptake nor xylonic acid production (Fig. 1c).
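As a quick illustration of how the derived quantities quoted here and in the following sections (substrate utilization, yield and volumetric productivity) relate to the raw measurements, a minimal helper is sketched below. The numbers used are those just reported for the pVWEx1-xylB transformant; note that the yield is computed on supplied xylose on a mass basis, whereas the "% of theoretical yield" values quoted in the paper may use a different basis.

def fermentation_metrics(xylose_supplied_g_l, utilisation_pct, titer_g_l, time_h):
    # Derived quantities commonly reported for such batch fermentations.
    xylose_consumed = xylose_supplied_g_l * utilisation_pct / 100.0   # g/L consumed
    yield_on_supplied = titer_g_l / xylose_supplied_g_l               # g product per g xylose supplied
    productivity = titer_g_l / time_h                                 # volumetric productivity, g/L/h
    return xylose_consumed, yield_on_supplied, productivity

# pVWEx1-xylB in CGXII medium: 35 g/L xylose, 77.2 % utilized, 32.5 g/L product after 120 h
consumed, y_supplied, q_vol = fermentation_metrics(35.0, 77.2, 32.5, 120.0)
print(f"xylose consumed ~ {consumed:.1f} g/L, yield ~ {y_supplied:.2f} g/g, productivity ~ {q_vol:.2f} g/L/h")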
Fig. 1 Xylose consumption (35 g/L) (closed triangle), xylonic acid production (closed circle) and growth curve (open circle) of C. glutamicum ATCC 31831 harboring (a) pVWEx1, (b) pVWEx1-xylB, (c) pVWEx1-xylC and (d) pVWEx1-xylBC, respectively Box–Behnken experimental design (BBD) and operational parameter optimization The objective of the experimental design was medium engineering for maximum xylonic acid production. There were a total of 15 runs for optimizing the four individual parameters in the current BBD. The experimental design and xylonic acid yields are presented in Table 2. The polynomial equation obtained for the model was as follows:
$$\begin{aligned} \text{Xylonic acid (g/L)} = \; & -48.7 - 0.45\,X_{1} + 3.5\,X_{2} + 0.220\,X_{3} + 2.058\,X_{4} \\ & - 0.019\,X_{1}^{2} - 0.2139\,X_{2}^{2} - 0.0423\,X_{3}^{2} - 0.01943\,X_{4}^{2} \\ & - 0.075\,X_{1}X_{2} + 0.0416\,X_{1}X_{3} - 0.0119\,X_{1}X_{4} \\ & + 0.526\,X_{2}X_{3} + 0.0482\,X_{2}X_{4} - 0.00128\,X_{3}X_{4} \end{aligned}$$
where X1, X2, X3 and X4 are the xylose, (NH4)2SO4, urea and inoculum concentrations, respectively. The maximum production efficiency (0.47 g L−1 h−1) was observed with Run No. 13, where the parameter settings were urea 11.5 g/L, xylose 60 g/L, (NH4)2SO4 7.5 g/L and inoculum 1.125%, and the xylonic acid titer was 56.32 g/L. This indicates that (NH4)2SO4, inoculum concentration and xylose have a more significant positive effect on xylonic acid yield than urea. Table 2 Box–Behnken experimental design matrix with experimental values of xylonic acid production by Corynebacterium glutamicum ATCC 31831 Response surface curves were plotted to examine the interactions of the variables and to determine the optimum level of each variable for maximum response. The contour plots showing the interaction between pairs of factors on xylonic acid yield are given in Fig. 2a–f. The major interactions studied are those of inoculum and xylose concentration (a), xylose and urea concentration (b), (NH4)2SO4 and urea concentration (c), inoculum and (NH4)2SO4 concentration (d), (NH4)2SO4 and xylose concentration (e), and inoculum and urea concentration (f). Fig. 2 Response surface methodology contour plots showing the effect of various parameters on xylonic acid production by C. glutamicum ATCC 31831. a Effect of inoculum and xylose. b Effect of xylose and urea. c Effect of (NH4)2SO4 and urea. d Effect of inoculum and (NH4)2SO4. e Effect of (NH4)2SO4 and xylose. f Effect of inoculum and urea The ANOVA of the response for xylonic acid is shown in Table 3. The R2 value indicates that the experimental factors explain 97.48% of the variability in xylonic acid yield. Table 3 Analysis of variance for xylonic acid production using C. glutamicum ATCC 31831 Role of the araE pentose transporter in enhanced xylose uptake and xylonic acid production Using the designed medium standardized for C. glutamicum ATCC 31831, which possesses an arabinose and xylose transporter encoded by araE, a comparative production study was carried out with recombinant C. glutamicum ATCC 13032.
Both strains grew well in the CGXII production medium and metabolized xylose to xylonic acid. After 120 h of fermentation, the recombinant strain ATCC 13032 produced 50.66 g/L of xylonic acid whereas ATCC 31831 produced 56.32 g/L (Fig. 3). C. glutamicum ATCC 31831 also showed better uptake of the pentose sugar, i.e. 75% consumption compared with 60% by ATCC 13032 after 120 h of fermentation, and the same was true for culture growth, where ATCC 31831 grew better (the culture broth required a 10× dilution for the spectrophotometric readings; Additional file 1: Figure S1). Fig. 3 Xylonic acid production by C. glutamicum ATCC 13032 (open bar) and C. glutamicum ATCC 31831 (closed bar) harbouring plasmid pVWEx1-xylB Xylonic acid from rice straw hydrolysate Fermentation was carried out in rice straw hydrolysate using C. glutamicum ATCC 31831 (pVWEx1-xylB). The strain could grow at different xylose concentrations (20, 40 and 60 g/L) in rice straw hydrolysate, and after 120 h of fermentation the maximum titer obtained was 42.94 g/L xylonic acid from 60 g/L xylose (Fig. 4). A production yield of 58.48% xylonic acid in hydrolysate is remarkable for sugar acid production with an engineered strain of C. glutamicum, which is quite tolerant of the inhibitors present in the hydrolysate. Fig. 4 Xylose utilization (open symbols) and xylonic acid production (closed symbols) by C. glutamicum ATCC 31831 (pVWEx1-xylB) in rice straw hydrolysate containing different concentrations of xylose: 20 g/L (open diamond), 40 g/L (open square) and 60 g/L (open circle). Xylonic acid production from 20 g/L xylose (closed diamond), 40 g/L xylose (closed square) and 60 g/L xylose (closed circle) Heterologous expression of genes for the production of various value-added chemicals has been successfully carried out in C. glutamicum, for example for the production of amino acids, sugar alcohols, organic acids, diamines, glycolate and 1,5-diaminopentane (Buschke et al. 2013; Meiswinkel et al. 2013; Zahoor et al. 2014; Pérez-García et al. 2016; Dhar et al. 2016). As a versatile industrial microbe with well-established genetic engineering tools, C. glutamicum allows rapid and rational manipulation towards diverse platform chemicals. Most corynebacteria are known not to utilize xylose as a carbon source. The absence of xylose-metabolizing genes restricts the growth of Corynebacterium in pentose-rich medium. To develop an efficient bioconversion system for xylonic acid synthesis, the genes of Caulobacter crescentus were expressed in C. glutamicum. The resulting transformants, C. glutamicum pVWEx1-xylB and pVWEx1-xylBC, were able to grow in mineral medium containing xylose and converted it into the corresponding pentonic acid. Xylose can be metabolized via four different routes: (I) the oxido-reductase pathway, (II) the isomerase pathway, (III) the Weimberg pathway, an oxidative pathway, and (IV) the Dahms pathway (Cabulong et al. 2018). Once inside the cell, xylose is converted to xylonolactone and then to xylonic acid upon expression of two genes, namely xylB (xylose dehydrogenase) and xylC (xylonolactonase). These two enzymes are involved in both the Weimberg and the Dahms pathway, in which xylose is metabolized to xylonic acid (Brüsseler et al. 2019). In the present study, it was observed that xylose dehydrogenase activity alone is sufficient for xylonic acid production. Without the dehydrogenase activity, the lactonase activity alone cannot carry out the conversion of xylose to xylonic acid.
Further, expression of the xylonolactonase together with the xylose dehydrogenase resulted in xylonic acid production, but not as efficiently as the dehydrogenase alone in the case of C. glutamicum. It has been reported that xylonolactone, once formed, can be converted to xylonic acid either by spontaneous oxidation of the lactone or through enzymatic hydrolysis by the xylonolactonase enzyme (Buchert and Viikari 1988). As Corynebacterium glutamicum is an aerobic organism, direct oxidation of xylonolactone to xylonic acid is more favorable inside the cell. Previous studies have also shown that xylose dehydrogenase (xylB) activity alone can result in the production of xylonic acid (Yim et al. 2017). Corynebacterium glutamicum ATCC 31831 grows on pentoses as the sole carbon source. The gene cluster responsible for pentose utilization comprises a six-cistron transcriptional unit with a total length of 7.8 kb. Within the ara gene cluster of C. glutamicum ATCC 31831, the araE gene encodes a pentose transporter that facilitates the efficient uptake of pentose sugars (Kawaguchi et al. 2009). Previous studies have also reported the role of the araE pentose transporter in Corynebacterium glutamicum ATCC 31831 and its exploitation for the production of commodity chemicals such as 3HP and ethanol (Becker et al. 2018). In the present study, Corynebacterium glutamicum ATCC 31831, with its innate araE pentose transporter, exhibited efficient consumption of xylose as well as its conversion to xylonic acid. Further studies are needed to explore whether the same araE pentose transporter also acts as an exporter for xylonic acid. Micrococcus spp., Pseudomonas, Kluyveromyces lactis, Caulobacter, Enterobacter, Gluconobacter, Klebsiella and Pseudoduganella danionis (Ishizaki et al. 1973; Buchert et al. 1988; Buchert and Viikari 1988; Toivari et al. 2011; Wiebe et al. 2015; Wang et al. 2016; Sundar Lekshmi et al. 2019) are the non-recombinant strains reported for xylonic acid production. Among these, Gluconobacter oxydans is the most prominent wild-type strain, exhibiting xylonic acid titers of up to 100 g L−1 (Toivari et al. 2012). Although these strains are capable of producing xylonic acid from pure sugar, they are poorly suited as industrial strains, since some are opportunistic pathogens and they have not been tested in hydrolysate medium, possibly owing to their lower tolerance towards lignocellulosic inhibitors. An earlier report described a recombinant C. glutamicum ATCC 13032 that produced 6.23 g L−1 of xylonic acid from 20 g L−1 of xylan (Yim et al. 2017). In that study, multiple modules were employed: (i) a xylan degradation module, (ii) a xylose-to-xylonic acid conversion module based on expression of the xdh gene, and (iii) a xylose transport module based on expression of the xylE gene, with gene expression optimized by introducing promoters (Yim et al. 2017). The product titers with C. glutamicum ATCC 31831 presented in this study are comparable with those of other wild-type and recombinant strains (Table 4), and the volumetric productivity in the feed phase can outperform the titers published for the recombinant C. glutamicum ATCC 13032. Table 4 Comparison of xylonic acid production and productivity by the best xylonic acid producers Media engineering was carried out with the statistical tool response surface methodology (RSM) for the enhanced production of xylonic acid. The Box–Behnken model containing 15 experimental runs was designed for the optimization study.
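For readers less familiar with how such a second-order response-surface model and its R² (Table 3) are obtained, a generic least-squares fit of a four-factor quadratic model in coded units is sketched below. The data are synthetic and for illustration only; they are not the experimental values of Table 2, and Minitab performs the equivalent computation internally.

import numpy as np
from itertools import combinations

def quadratic_model_matrix(X):
    # Columns: intercept, linear terms, squared terms and two-factor interactions --
    # the standard second-order RSM model.
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(27, 4))          # 27 synthetic runs, 4 coded factors
y = 50 + 4 * X[:, 0] - 3 * X[:, 1] ** 2 + 2 * X[:, 0] * X[:, 2] + rng.normal(0, 0.5, 27)

A = quadratic_model_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)      # least-squares coefficients
residuals = y - A @ beta
r_squared = 1.0 - residuals.var() / y.var()
print("fitted coefficients:", np.round(beta, 2))
print("R^2 =", round(float(r_squared), 4))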
RSM helped to narrow down the most influential parameters and to optimize them for xylonic acid production. The engineered strain produced up to 56.3 g/L of xylonic acid and is characterized by a high volumetric productivity and a maximum product yield of 76.67% under optimized conditions applying defined xylose/glucose mixtures in synthetic medium. One of the major challenges is the range of acidic and furan aldehyde compounds released during lignocellulosic pretreatment. Here, the recombinant C. glutamicum ATCC 31831 could resist the inhibitors present in rice straw hydrolysate and produced xylonic acid at nearly 58.5% of the maximum possible yield. The remaining challenges involve obtaining sufficient xylose after pretreatment and separating xylonic acid from the fermented broth. For industrial application, downstream processing of xylonic acid is very important. Ethanol precipitation and product recovery by extraction are two interesting options described for the purification of xylonic acid from the fermentation broth (Liu et al. 2012). With this industrially streamlined recombinant strain, a highly profitable bioprocess to produce xylonic acid from lignocellulosic biomass as a cost-efficient second-generation substrate is well within reach. The one-step conversion of xylose to xylonic acid and the bioprocess developed in the present study favor pentose sugar utilization from rice straw in a straightforward and cost-effective manner. The proof of concept showed the simultaneous utilization of biomass-derived sugars (C5 and C6), and this has to be investigated in detail. All data generated or analysed during this study are included in this published article and its additional files. Abe S, Takayama K-I, Kinoshita S (1967) Taxonomical studies on glutamic acid-producing bacteria. J Gen Appl Microbiol 13:279–301. https://doi.org/10.2323/jgam.13.279 Baritugo K-A, Kim HT, David Y, Choi J, Hong SH, Jeong KJ, Choi JH, Joo JC, Park SJ (2018) Metabolic engineering of Corynebacterium glutamicum for fermentative production of chemicals in biorefinery. Appl Microbiol Biotechnol 102:3915–3937. https://doi.org/10.1007/s00253-018-8896-6 Becker J, Rohles CM, Wittmann C (2018) Metabolically engineered Corynebacterium glutamicum for bio-based production of chemicals, fuels, materials, and healthcare products. Metab Eng. https://doi.org/10.1016/j.ymben.2018.07.008 Box GEP, Behnken DW (1960) Some new three level designs for the study of quantitative variables. Technometrics 2:455–475. https://doi.org/10.1080/00401706.1960.10489912 Brüsseler C, Späth A, Sokolowsky S, Marienhagen J (2019) Alone at last!—heterologous expression of a single gene is sufficient for establishing the five-step Weimberg pathway in Corynebacterium glutamicum. Metab Eng Commun 9:e00090. https://doi.org/10.1016/j.mec.2019.e00090 Buchert J, Viikari L (1988) The role of xylonolactone in xylonic acid production by Pseudomonas fragi. Appl Microbiol Biotechnol 27:333–336. https://doi.org/10.1007/BF00251763 Buchert J, Puls J, Poutanen K (1988) Comparison of Pseudomonas fragi and Gluconobacter oxydans for production of xylonic acid from hemicellulose hydrolyzates. Appl Microbiol Biotechnol 28:367–372. https://doi.org/10.1007/BF00268197 Buschke N, Becker J, Schäfer R, Kiefer P, Biedendieck R, Wittmann C (2013) Systems metabolic engineering of xylose-utilizing Corynebacterium glutamicum for production of 1,5-diaminopentane. Biotechnol J 8:557–570.
https://doi.org/10.1002/biot.201200367 Byong-Wa C, Benita D, Macuch PJ, Debbie W, Charlotte P, Ara J (2006) The development of cement and concrete additive. Appl Biochem Biotechnol 131:645–658. https://doi.org/10.1385/abab:131:1:645 Cabulong RB, Lee W-K, Bañares AB, Ramos KRM, Nisola GM, Valdehuesa KNG, Chung W-J (2018) Engineering Escherichia coli for glycolic acid production from d-xylose through the Dahms pathway and glyoxylate bypass. Appl Microbiol Biotechnol 102:2179–2189. https://doi.org/10.1007/s00253-018-8744-8 Cao Y, Xian M, Zou H, Zhang H (2013) Metabolic engineering of Escherichia coli for the production of xylonate. PLoS ONE 8:e67305. https://doi.org/10.1371/journal.pone.0067305 Choi JW, Jeon EJ, Jeong KJ (2019) Recent advances in engineering Corynebacterium glutamicum for utilization of hemicellulosic biomass. Curr Opin Biotechnol 57:17–24. https://doi.org/10.1016/j.copbio.2018.11.004 Chung CT, Miller RH (1993) Preparation and storage of competent Escherichia coli cells. Methods Enzymol 218:621–627. https://doi.org/10.1016/0076-6879(93)18045-E de Arruda PV, de Cássia Lacerda Brambilla Rodrigu R, da Silva DDV, de Almeida Felipe M (2011) Evaluation of hexose and pentose in pre-cultivation of Candida guilliermondii on the key enzymes for xylitol production in sugarcane hemicellulosic hydrolysate. Biodegradation 22:815–822. https://doi.org/10.1007/s10532-010-9397-1 Dhar KS, Wendisch VF, Nampoothiri KM (2016) Engineering of Corynebacterium glutamicum for xylitol production from lignocellulosic pentose sugars. J Biotechnol 230:63–71. https://doi.org/10.1016/j.jbiotec.2016.05.011 Gopinath V, Meiswinkel TM, Wendisch VF, Nampoothiri KM (2011) Amino acid production from rice straw and wheat bran hydrolysates by recombinant pentose-utilizing Corynebacterium glutamicum. Appl Microbiol Biotechnol 92:985–996. https://doi.org/10.1007/s00253-011-3478-x Hanahan D (1983) Studies on transformation of Escherichia coli with plasmids. J Mol Biol 166:557–580. https://doi.org/10.1016/S0022-2836(83)80284-8 Heider SAE, Wendisch VF (2015) Engineering microbial cell factories: metabolic engineering of Corynebacterium glutamicum with a focus on non-natural products. Biotechnol J 10:1170–1184. https://doi.org/10.1002/biot.201400590 Hirasawa T, Shimizu H (2016) Recent advances in amino acid production by microbial cells. Curr Opin Biotechnol 42:133–146. https://doi.org/10.1016/j.copbio.2016.04.017 Ishizaki H, Ihara T, Yoshitake J, Shimamura M, Imai T (1973) d-Xylonic acid production by Enterobacter cloacae. J Agric Chem Soc Jpn 47:755–761. https://doi.org/10.1271/nogeikagaku1924.47.755 Johanna B, Liisa V (1988) Oxidative d-xylose metabolism of Gluconobacter oxydans. Appl Microbiol Biotechnol 29:375–379 Kawaguchi H, Sasaki M, Vertes AA, Inui M, Yukawa H (2009) Identification and functional analysis of the gene cluster for l-arabinose utilization in Corynebacterium glutamicum. Appl Environ Microbiol 75:3419–3429. https://doi.org/10.1128/AEM.02912-08 Keilhauer C, Eggeling L, Sahm H (1993) Isoleucine synthesis in Corynebacterium glutamicum: molecular analysis of the ilvB-ilvN-ilvC operon. J Bacteriol 175:5595–5603. https://doi.org/10.1128/JB.175.17.5595-5603.1993 Kinoshita S, Udaka S, Shimono M (2004) Studies on the amino acid fermentation Part 1 Production of l-glutamic acid by various microorganisms. J Gen Appl Microbiol 50(6):331–343 Lin T-H, Huang C-F, Guo G-L, Hwang W-S, Huang S-L (2012) Pilot-scale ethanol production from rice straw hydrolysates using xylose-fermenting Pichia stipitis. 
Bioresour Technol 116:314–319. https://doi.org/10.1016/j.biortech.2012.03.089 Liu H, Valdehuesa KNG, Nisola GM, Ramos KRM, Chung W-J (2012) High yield production of d-xylonic acid from d-xylose using engineered Escherichia coli. Bioresour Technol 115:244–248. https://doi.org/10.1016/j.biortech.2011.08.065 Matano C, Meiswinkel TM, Wendisch VF (2014) Amino acid production from rice straw hydrolyzates. In: Wheat and rice in disease prevention and health. pp 493–505 Meijnen JP, De Winde JH, Ruijssenaars HJ (2009) Establishment of oxidative D-xylose metabolism in Pseudomonas putida S12. Appl Environ Microbiol 75:2784–2791. https://doi.org/10.1128/AEM.02713-08 Meiswinkel TM, Gopinath V, Lindner SN, Nampoothiri KM, Wendisch VF (2013) Accelerated pentose utilization by Corynebacterium glutamicum for accelerated production of lysine, glutamate, ornithine and putrescine. Microb Biotechnol 6:131–140. https://doi.org/10.1111/1751-7915.12001 Mussatto SI, Teixeira JA (2010) Lignocellulose as raw material in fermentation processes. Microbial Biotechnol 2:897–907 Nygård Y, Toivari MH, Penttilä M, Ruohonen L, Wiebe MG (2011) Bioconversion of d-xylose to d-xylonate with Kluyveromyces lactis. Metab Eng 13:383–391. https://doi.org/10.1016/j.ymben.2011.04.001 Ou MS, Ingram LO, Shanmugam KT (2011) L(+)-Lactic acid production from non-food carbohydrates by thermotolerant Bacillus coagulans. J Ind Microbiol Biotechnol 38:599–605. https://doi.org/10.1007/s10295-010-0796-4 Pérez-García F, Peters-Wendisch P, Wendisch VF (2016) Engineering Corynebacterium glutamicum for fast production of l-lysine and l-pipecolic acid. Appl Microbiol Biotechnol 100:8075–8090. https://doi.org/10.1007/s00253-016-7682-6 Peters-Wendisch PG, Schiel B, Wendisch VF, Katsoulidis E, Möckel B, Sahm H, Eikmanns BJ (2001) Pyruvate carboxylase is a major bottleneck for glutamate and lysine production by Corynebacterium glutamicum. J Mol Biol Biotechnol 3(2):295–300 Raganati F, Olivieri G, Götz P, Marzocchella A, Salatino P (2015) Butanol production from hexoses and pentoses by fermentation of Clostridium acetobutylicum. Anaerobe 34:146–155. https://doi.org/10.1016/j.anaerobe.2015.05.008 Sakai S, Tsuchida Y, Okino S, Ichihashi O, Kawaguchi H, Watanabe T, Inui M, Yukawa H (2007) Effect of lignocellulose-derived inhibitors on growth of and ethanol production by growth-arrested Corynebacterium glutamicum R. Appl Environ Microbiol 73:2349–2353. https://doi.org/10.1128/AEM.02880-06 Sambrook BJ, Maccallum P, Russell D (2006) Molecular cloning: a laboratory manual to order or request additional information. Mol Clon 1:1–3 Stephens C, Christen B, Fuchs T, Sundaram V, Watanabe K, Jenal U (2007) Genetic analysis of a novel pathway for d-xylose metabolism in Caulobacter crescentus. J Bacteriol 189:2181–2185. https://doi.org/10.1128/JB.01438-06 Sundar Lekshmi MS, Susmitha A, Soumya MP, Keerthi Sasikumar, Nampoothiri Madhavan K (2019) Bioconversion of d-xylose to d-xylonic acid by Pseudoduganella danionis. Indian J Exp Biol 57:821–824 Toivari MH, Ruohonen L, Richard P, Penttilä M, Wiebe MG (2010) Saccharomyces cerevisiae engineered to produce D-xylonate. Appl Microbiol Biotechnol 88:751–760. https://doi.org/10.1007/s00253-010-2787-9 Toivari MH, Penttil M, Ruohonen L, Wiebe MG, Nyg Y (2011) Bioconversion of d-xylose to d-xylonate with Kluyveromyces lactis. Metab Eng 13:383–391. 
https://doi.org/10.1016/j.ymben.2011.04.001 Toivari M, Nygård Y, Kumpula EP, Vehkomäki ML, Benčina M, Valkonen M, Maaheimo H, Andberg M, Koivula A, Ruohonen L, Penttilä M, Wiebe MG (2012) Metabolic engineering of Saccharomyces cerevisiae for bioconversion of d-xylose to d-xylonate. Metab Eng 14:427–436. https://doi.org/10.1016/j.ymben.2012.03.002 van der Rest ME, Lange C, Molenaar D (1999) A heat shock following electroporation induces highly efficient transformation of Corynebacterium glutamicum with xenogeneic plasmid DNA. Appl Microbiol Biotechnol 52:541–545. https://doi.org/10.1007/s002530051557 Wang C, Wei D, Zhang Z, Wang D, Shi J, Kim CH, Jiang B, Han Z, Hao J (2016) Production of xylonic acid by Klebsiella pneumoniae. Appl Microbiol Biotechnol 100:10055–10063. https://doi.org/10.1007/s00253-016-7825-9 Wei Niu, Mapitso Molefe N, Frost JW (2003) Microbial synthesis of the energetic material precursor 1,2,4-butanetriol. J Am Chem Soc 125:12998–12999 Wendisch VF, Brito LF, Gil Lopez M, Hennig G, Pfeifenschneider J, Sgobba E, Veldmann KH (2016) The flexible feedstock concept in industrial biotechnology: metabolic engineering of Escherichia coli, Corynebacterium glutamicum, Pseudomonas, Bacillus and yeast strains for access to alternative carbon sources. J Biotechnol 234:139–157. https://doi.org/10.1016/j.jbiotec.2016.07.022 Wiebe MG, Nygård Y, Oja M, Andberg M, Ruohonen L, Koivula A, Penttilä M, Toivari M (2015) A novel aldose-aldose oxidoreductase for co-production of d-xylonate and xylitol from d-xylose with Saccharomyces cerevisiae. Appl Microbiol Biotechnol 99:9439–9447. https://doi.org/10.1007/s00253-015-6878-5 Wisselink HW, Toirkens MJ, Wu Q, Pronk JT, van Maris AJA (2009) Novel evolutionary engineering approach for accelerated utilization of glucose, xylose, and arabinose mixtures by engineered Saccharomyces cerevisiae strains. Appl Environ Microbiol 75:907–914. https://doi.org/10.1128/AEM.02268-08 Yim SS, Choi JW, Lee SH, Jeon EJ, Chung WJ, Jeong KJ (2017) Engineering of Corynebacterium glutamicum for consolidated conversion of hemicellulosic biomass into xylonic acid. Biotechnol J 12:1–9. https://doi.org/10.1002/biot.201700040 Zahoor A, Otten A, Wendisch VF (2014) Metabolic engineering of Corynebacterium glutamicum for glycolate production. J Biotechnol 192:366–375. https://doi.org/10.1016/j.jbiotec.2013.12.020 The first author LS acknowledges the Senior Research Fellowship (SRF) by Council of Scientific and Innovative Research (CSIR), New Delhi. KMN and VFW acknowledge the financial assistance from DBT, New Delhi BMBF, and Germany to work on Corynebacterium glutamicum. The study is funded by DBT, New Delhi and BMBF, Germany under Indo German collaboration. Microbial Processes and Technology Division, CSIR–National Institute for Interdisciplinary Science and Technology (NIIST), Thiruvananthapuram, 695019, Kerala, India M. S. Lekshmi Sundar, Aliyath Susmitha, Devi Rajan, Keerthi Sasikumar & K. Madhavan Nampoothiri Academy of Scientific and Innovative Research (AcSIR), CSIR-National Institute for Interdisciplinary Science and Technology (CSIR-NIIST), Thiruvananthapuram, 695019, Kerala, India M. S. Lekshmi Sundar, Aliyath Susmitha, Keerthi Sasikumar & K. Madhavan Nampoothiri Genetics of Prokaryotes, Faculty of Biology & CeBiTec, Bielefeld University, Bielefeld, Germany Silvin Hannibal & Volker F. Wendisch M. S. Lekshmi Sundar Aliyath Susmitha Devi Rajan Silvin Hannibal Keerthi Sasikumar Volker F. Wendisch K. 
Madhavan Nampoothiri LS, the first author executed majority of the work and wrote the article.SA, SH and KS contributed in the molecular biology aspects of the work while DR involved in the RSM studies. VFW helped in critical reading of manuscript. KMN, the corresponding author who conceived and designed the research and helped to prepare the manuscript. All authors read and approved the manuscript. Correspondence to K. Madhavan Nampoothiri. The authors declare that they have no conflict of interest regarding this manuscript. This article doesn't contain any studies performed with animals or humans by any of the authors. The authors declare(s) that they have no competing interests. Growth (circles) and xylose consumption (triangles) by C. glutamicum ATCC 13032 (pVWEx1-xylB) (open symbols) and C. glutamicum ATCC 31831 (pVWEx1-xylB) (closed symbols) in CGXII medium containing 60 g/L xylose. Sundar, M.S.L., Susmitha, A., Rajan, D. et al. Heterologous expression of genes for bioconversion of xylose to xylonic acid in Corynebacterium glutamicum and optimization of the bioprocess. AMB Expr 10, 68 (2020). https://doi.org/10.1186/s13568-020-01003-9 Corynebacterium glutamicum Heterologous expression Response surface methodology (RSM) Xylonic acid Xylose dehydrogenase
December 2013, 8(4): 969-984. doi: 10.3934/nhm.2013.8.969 Convergence of vanishing capillarity approximations for scalar conservation laws with discontinuous fluxes Giuseppe Maria Coclite 1, Lorenzo di Ruvo 2, Jan Ernest 3 and Siddhartha Mishra 3 Department of Mathematics, University of Bari, Via E. Orabona 4, I-70125 Bari, Italy; Department of Mathematics, University of Bari, Via E. Orabona 4, 70125 Bari, Italy; Seminar for Applied Mathematics (SAM), ETH Zürich, HG G 57.2, Rämistrasse 101, 8092 Zürich, Switzerland Received October 2012 Revised April 2013 Published November 2013 Flow of two phases in a heterogeneous porous medium is modeled by a scalar conservation law with a discontinuous coefficient. As solutions of conservation laws with discontinuous coefficients depend explicitly on the underlying small scale effects, we consider a model where the relevant small scale effect is dynamic capillary pressure. We prove that the limit of vanishing dynamic capillary pressure exists and is a weak solution of the corresponding scalar conservation law with discontinuous coefficient. A robust numerical scheme for approximating the resulting limit solutions is introduced. Numerical experiments show that the scheme is able to approximate interesting solution features such as propagating non-classical shock waves as well as discontinuous standing waves efficiently. Keywords: discontinuous fluxes, conservation laws, capillarity approximation. Mathematics Subject Classification: Primary: 35L65; Secondary: 35L7. Citation: Giuseppe Maria Coclite, Lorenzo di Ruvo, Jan Ernest, Siddhartha Mishra. Convergence of vanishing capillarity approximations for scalar conservation laws with discontinuous fluxes. Networks & Heterogeneous Media, 2013, 8 (4) : 969-984. doi: 10.3934/nhm.2013.8.969
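To make the setting concrete, the following Python sketch discretises a model problem of the same type, u_t + (k(x) f(u))_x = 0 with a piecewise constant coefficient k and flux f(u) = u(1 − u), using a plain first-order Lax–Friedrichs scheme. This is not the scheme introduced in the paper, and near the jump in k such a naive discretisation need not select the physically relevant vanishing-capillarity solution; it only illustrates the type of problem being approximated. All parameter values below are arbitrary.

import numpy as np

N, T, cfl = 400, 0.25, 0.4
dx = 1.0 / N
x = (np.arange(N) + 0.5) * dx                 # cell centres on [0, 1]
k = np.where(x < 0.5, 1.0, 2.0)               # discontinuous coefficient k(x)
u = np.where(x < 0.3, 0.8, 0.1)               # Riemann-type initial data

t = 0.0
while t < T:
    speed = np.max(np.abs(k * (1.0 - 2.0 * u)))        # max |d/du (k f(u))|
    dt = min(cfl * dx / speed, T - t)
    F = k * u * (1.0 - u)                              # physical flux k(x) f(u)
    Fh = 0.5 * (F[:-1] + F[1:]) - 0.5 * (dx / dt) * (u[1:] - u[:-1])   # Lax-Friedrichs flux
    u[1:-1] -= (dt / dx) * (Fh[1:] - Fh[:-1])          # update interior cells only
    t += dt
print("u range after t =", T, ":", float(u.min()), "to", float(u.max()))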
Journal of Mathematical Biology Using genetic data to estimate diffusion rates in heterogeneous landscapes L. Roques E. Walker P. Franck S. Soubeyrand E. K. Klein Having a precise knowledge of the dispersal ability of a population in a heterogeneous environment is of critical importance in agroecology and conservation biology as it can provide management tools to limit the effects of pests or to increase the survival of endangered species. In this paper, we propose a mechanistic-statistical method to estimate space-dependent diffusion parameters of spatially-explicit models based on stochastic differential equations, using genetic data. Dividing the total population into subpopulations corresponding to different habitat patches with known allele frequencies, the expected proportions of individuals from each subpopulation at each position is computed by solving a system of reaction–diffusion equations. Modelling the capture and genotyping of the individuals with a statistical approach, we derive a numerically tractable formula for the likelihood function associated with the diffusion parameters. In a simulated environment made of three types of regions, each associated with a different diffusion coefficient, we successfully estimate the diffusion parameters with a maximum-likelihood approach. Although higher genetic differentiation among subpopulations leads to more accurate estimations, once a certain level of differentiation has been reached, the finite size of the genotyped population becomes the limiting factor for accurate estimation. Reaction–diffusion Stochastic differential equation Inference Mechanistic-statistical model Allele frequencies Genotype measurements The online version of this article (doi: 10.1007/s00285-015-0954-4) contains supplementary material, which is available to authorized users. The research leading to these results has received funding from the French Agence Nationale pour la Recherche, within the ANR-12-AGRO-0006 PEERLESS, ANR-13-ADAP-0006 MECC and ANR-14-CE25-0013 NONLOCAL projects and from the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC Grant Agreement n.321186-ReaDi-Reaction-Diffusion Equations, Propagation and Modellings. Mathematics Subject Classification 35K45 35K57 35Q92 65C30 92D10 92D40 Supplementary material 1 (avi 375 KB) Appendix 1: gradual release of the pre-dispersal populations The Eq. (2.2) describes a simultaneous release of all the individuals at \(t=0.\) To account for a possible gradual release of the individuals, the Eq. (2.2) can be replaced by: $$\begin{aligned} \frac{\partial u}{\partial t}=\varDelta (D(x) \, u) -\frac{u}{\nu }+u_0(x)\, f(t), \ t>0, \, x\in \varOmega , \end{aligned}$$ where the term \(u_0(x)\, f(t)\) describes the release of the individuals; \(u_0(x)\) still corresponds to the pre-dispersal density and the function f(t) is the release rate. It can be described by any nonnegative function or distribution with integral 1 and with support in [0, T], T corresponding to the end of the release period. In this framework, the density of dispersers coming from habitat \(\varOmega ^h\) satisfies the equation: $$\begin{aligned} \frac{\partial u^h}{\partial t}=\varDelta (D(x) \, u^h) -\frac{u^h}{\nu }+u_0^h(x)\, f(t), \ t>0, \, x\in \varOmega , \end{aligned}$$ where \(u_0^h\) is still given by (2.7). 
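A brief numerical illustration of the release term \(u_0(x)\, f(t)\): any non-negative release rate supported in \([0,T]\) with unit integral is admissible. The sketch below builds one such \(f\) and tracks the total population mass \(M(t)=\int _\varOmega u(t,x)\,dx\) through the spatially integrated balance \(M'(t)=-M(t)/\nu +f(t)\), which follows from the equation above when the boundary is reflecting and the pre-dispersal density has unit total mass. The parameter values used here are arbitrary.

import numpy as np

T, nu = 10.0, 30.0                        # release period and mortality parameter (arbitrary values)
t = np.linspace(0.0, 4 * T, 4001)
dt = t[1] - t[0]
s = np.clip(t / T, 0.0, 1.0)
raw = np.where((t > 0) & (t < T), np.exp(-1.0 / (s * (1 - s) + 1e-12)), 0.0)   # smooth bump on (0, T)
f = raw / (raw.sum() * dt)                # rescale so that the integral over [0, T] is 1

M = np.zeros_like(t)                      # total mass, M(0) = 0 before any release
for i in range(len(t) - 1):
    M[i + 1] = M[i] + dt * (-M[i] / nu + f[i])        # forward Euler on the mass balance
print("integral of f ~", f.sum() * dt, " max mass:", M.max(), " mass at t = 4T:", M[-1])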
Appendix B: precise shape of the diffusion terms In our numerical computations, we took $$\begin{aligned} \phi (x)=\mu _{2\, R}(\Vert x\Vert ) \hbox { and }\psi (x)=\psi (x_1,x_2)=\mu _{R}\left( x_1- q\right) , \end{aligned}$$ for the function \(\mu \) defined by (see Fig. 7): $$\begin{aligned} \mu _R(r)=\exp \left( \frac{-r^4}{(r^2-R^2)^2}\right) \hbox { for }r \in (-R,R) \hbox { and }\mu _R(r)=0 \hbox { otherwize}. \end{aligned}$$ The function \(\mu _R(r)\), for \(R=0.05\) and \(r\in (-0.1, 0.1)\) Appendix C: computation of the \(F_{ST}\) The index \(F_{ST}\) is used as a measure of genetic differentiation among the subpopulations. It was computed as follows: we set $$\begin{aligned} J_S=\frac{1}{\varLambda }\sum \limits _{\lambda =1}^{\varLambda } \sum \limits _{a=1}^{A} \sum \limits _{h=1}^{H} \frac{1}{H}\left( p_{h \lambda a}\right) ^2 \hbox { and } J_T=\frac{1}{\varLambda }\sum \limits _{\lambda =1}^{\varLambda }\sum \limits _{a=1}^{A} \left( \frac{1}{H} \sum \limits _{h=1}^{H} p_{h \lambda a}\right) ^2, \end{aligned}$$ where \(\varLambda \) is the number of loci, A, the number of alleles per locus whose frequency is measured and H the number of subpopulations. Here, \(J_S\) and \(J_T\) denote the mean homozygosity across subpopulations and the homozygosity of the total population, respectively. Then, we can write $$\begin{aligned} F_{ST}= \frac{J_S-J_T}{1-J_T}. \end{aligned}$$ This formula corresponds to Nei's \(G_{ST}\) for a single locus (Nei 1973), with numerator and denominator averaged over the \(\varLambda \) loci. In our computations, all the subpopulations had the same size; in other situations, the weight 1 / H in the above formulas for \(J_S\) and \(J_T\) should be replaced by the relative sizes of the subpopulations. Appendix D: numerical computation of the cumulated population densities In order to compute the cumulated densities \(w_\infty (x)\) and \(w_\infty ^h(x),\) we used the time-dependent partial differential equation solver Comsol Multiphysics\(^{\copyright }\) applied to the evolution equations (10.2) and (10.4) below at large time (\(t=20\)), with default parameter values (finite element method with second order basis elements) and a triangular mesh adapted to the geometry of our landscape and made of 5296 elements. We defined the cumulated population density at intermediate times t and position x by: $$\begin{aligned} w_t(x)=\int _0^{t}u(s,x)\, ds, \ \hbox { for all }t>0, \ x\in \varOmega . \end{aligned}$$ Integrating (2.2) between 0 and \(t>0\) we note that \(w_t(x)\) satisfies the following equation: $$\begin{aligned} \frac{\partial w_t}{\partial t}=\varDelta (D(x) \, w_t) -\frac{w_t}{\nu }+u_0(x), \ t>0, \, x\in \varOmega , \end{aligned}$$ and \(w_0(x)=0.\) Similarly, the cumulated population density of individuals coming from \(\varOmega ^h\) is: $$\begin{aligned} w_t^h(x)=\int _0^{t}u^h(s,x)\, ds, \ \hbox { for all }t>0, \ x\in \varOmega . \end{aligned}$$ This function satisfies: $$\begin{aligned} \frac{\partial w_t^h}{\partial t}=\varDelta (D(x) \, w_t^h) -\frac{w_t^h}{\nu }+u_0^h(x), \ t>0, \, x\in \varOmega , \end{aligned}$$ and \(w_0^h(x)=0.\) Appendix E: using abundance data in the inference of the diffusion rates As already mentioned, an important feature of our approach is that the likelihood does not depend on the capture rates \(\beta _\tau \). 
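Returning briefly to Appendix C, the differentiation index is straightforward to compute from a table of allele frequencies. The sketch below implements the \(F_{ST}\) formula above for equally sized subpopulations; the frequencies used in the example are made up.

import numpy as np

def fst(p):
    # p: allele frequencies with shape (n_loci, n_alleles, n_subpops), equal subpopulation sizes.
    H = p.shape[2]
    JS = np.mean((p ** 2).sum(axis=(1, 2)) / H)       # J_S: mean homozygosity across subpopulations
    JT = np.mean((p.mean(axis=2) ** 2).sum(axis=1))   # J_T: homozygosity of the pooled population
    return (JS - JT) / (1.0 - JT)

# Toy example: 2 loci, 2 alleles per locus, 3 subpopulations (frequencies sum to 1 over alleles).
p = np.array([[[0.9, 0.5, 0.2],
               [0.1, 0.5, 0.8]],
              [[0.6, 0.3, 0.4],
               [0.4, 0.7, 0.6]]])
print("F_ST =", round(float(fst(p)), 4))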
As the expected number of individuals captured in a trap \(\theta _\tau \) is proportional to \(\alpha \, \beta _\tau \), the absolute number of individuals captured in \(\theta _\tau \) cannot be used directly to infer the diffusion parameters if the capture rates are not known. However, if the capture rate was the same (\(=\beta \)) for all traps, we could include the information on the absolute number of captured individuals \(\mathbf {I}=\{I_1, \ldots ,I_J\}\) by computing the likelihood $$\begin{aligned} \begin{array}{ll} \mathcal {L}(D,\alpha \, \beta ) &{} =\mathbb {P}(\mathbf {\mathcal {G}},\mathbf {I}|D, \alpha \, \beta ) \\ &{} = \mathbb {P}(\mathbf {\mathcal {G}}|\mathbf {I},D, \alpha \, \beta ) \mathbb {P}(\mathbf {I}|D, \alpha \, \beta ), \end{array} \end{aligned}$$ where \(\mathbf {\mathcal {G}}\) is the genotype information. In our framework, the genotype information does not depend on the number of captured individuals in each trap, as we assumed a constant number of genotyped individuals per trap, G. Besides, we have shown that the quantity \(\mathbb {P}(\mathbf {\mathcal {G}}|D, \alpha \, \beta )\) was independent of \(\alpha \, \beta \). Using the assumptions of Sect. 2 on the capture process, the quantity \(\mathbb {P}(\mathbf {I}|D, \alpha \, \beta )\) can be computed explicitly: $$\begin{aligned} \mathbb {P}(\mathbf {I}|D, \alpha \, \beta )=\prod \limits _{\tau =1}^{J}\exp (-C_\tau )\frac{C_\tau ^{I_\tau }}{(I_\tau !)}. \end{aligned}$$ Finally, one can infer the diffusion parameters, together with the product \( \alpha \, \beta \) by maximising the likelihood: $$\begin{aligned} \mathcal {L}(D,\alpha \, \beta )=2^{k} \prod \limits _{\tau =1, \ldots , J} \prod \limits _{i=1, \ldots , G} \exp (-C_\tau )\frac{C_\tau ^{I_\tau }}{(I_\tau !)} \sum \limits _{h=1}^{H}\left[ \frac{C^h_\tau }{C_\tau }\prod \limits _{\lambda =1}^{\varLambda }p_{h \lambda a^1} \, p_{h \lambda a^2}\right] , \end{aligned}$$ where k is the total number of heterozygous loci in the genotyped population. For the computation of \(C_\tau \) and \(C^h_\tau ,\) the pre-dispersal density \(\alpha \) can be fixed arbitrarily to 1. Appendix F: modelling sharp transitions between regions with different diffusion rates For the sake of simplicity, we assumed in this paper that the coefficient D was a smooth function of the position x, leading to a scalar equation (2.2), with a unique classical solution. Sharp transitions could be modelled by replacing the Eq. (2.2) by a system of N equations, where N is the number of patches \(\varOmega _i\) where the diffusion coefficient takes a constant value \(D_i\) and \(u_i\) is the population density in the patch \(\varOmega _i\): $$\begin{aligned}\left\{ \begin{array}{l} \displaystyle \frac{\partial u_i}{\partial t}=D_i \, \varDelta u_i - \frac{u_i}{\nu }, \ x \in \varOmega _i, \\ \displaystyle u_i=u_j, \ x \in \partial \varOmega _i\cap \partial \varOmega _j, \\ \displaystyle D_i \, \nabla u_i \cdot \mathbf {n}_i=-D_j \, \nabla u_j \cdot \mathbf {n}_j, \ x \in \partial \varOmega _i\cap \partial \varOmega _j, \end{array}\right. \end{aligned}$$ where \(\partial \varOmega _i\) denotes the boundary of \(\varOmega _i\) and \(\mathbf {n}_i\) the outward unit normal to the boundary. The first boundary condition corresponds to the continuity of the population density in \(\varOmega =\bigcup \nolimits _{i}\varOmega _i\). 
The second boundary condition guaranties the conservation of mass in the absence of mortality (\(\nu =+\infty \)) and with reflecting boundary conditions on \(\partial \varOmega \). Anderson E, Thompson E (2002) A model-based method for identifying species hybrids using multilocus genetic data. Genetics 160(3):1217–1229Google Scholar Berliner LM (2003) Physical-statistical modeling in geophysics. J Geophys Res 108:8776CrossRefGoogle Scholar Bohonak AJ (1999) Dispersal, gene flow, and population structure. Q Rev Biol 1999:21-45Google Scholar Broquet T, Ray N, Petit E, Fryxell JM, Burel F (2006) Genetic isolation by distance and landscape connectivity in the American marten (martes americana). Landsc Ecol 21(6):877–889CrossRefGoogle Scholar Calderón AP (1980) On an inverse boundary value problem. In: Raupp MA, Meyer WH (eds) Seminar on numerical analysis and its applications to continuum physics. Sociedade Brasileira de Matematica, Brazil, pp 63–73Google Scholar Cantrell RS, Cosner C (2003) Spatial ecology via reaction–diffusion equations. Wiley, ChichesterzbMATHGoogle Scholar Cornuet J-M, Piry S, Luikart G, Estoup A, Solignac M (1999) New methods employing multilocus genotypes to select or exclude populations as origins of individuals. Genetics 153(4):1989–2000Google Scholar Doyle PG, Snell JL (1984) Random walks and electric networks. AMC 10:12MathSciNetzbMATHGoogle Scholar Durbin J, Koopman SJ (2012) Time series analysis by state space methods, vol 38. Oxford University Press, OxfordCrossRefzbMATHGoogle Scholar Gardiner C (2009) Stochastic methods. In: Springer series in synergetics. Springer, BerlinGoogle Scholar Gilligan CA (2008) Sustainable agriculture and plant diseases: an epidemiological perspective. Philos Trans R Soc B Biol Sci 363(1492):741–759CrossRefGoogle Scholar Graves T, Chandler RB, Royle JA, Beier P, Kendall KC (2014) Estimating landscape resistance to dispersal. Landsc Ecol 29(7):1201–1211CrossRefGoogle Scholar Graves TA, Beier P, Royle JA (2013) Current approaches using genetic distances produce poor estimates of landscape resistance to interindividual dispersal. Mol Ecol 22(15):3888–3903CrossRefGoogle Scholar Hamrick J, Trapnell DW (2011) Using population genetic analyses to understand seed dispersal patterns. Acta Oecologica 37(6):641–649CrossRefGoogle Scholar Hanski IA, Gilpin ME (1996) Metapopulation biology: ecology, genetics, and evolution. Academic Press, New YorkzbMATHGoogle Scholar Hecht F (2012) New development in freefem++. J Numer Math 20(3–4):251–265MathSciNetzbMATHGoogle Scholar Hewitt GM (2000) The genetic legacy of the quarternary ice ages. Nature 405:907–913 (22 June 2000)CrossRefGoogle Scholar Kareiva PM (1983) Local movement in herbivorous insects: applying a passive diffusion model to mark-recapture field experiments. Oecologia 57:322–327CrossRefGoogle Scholar Klein EK, Bontemps A, Oddou-Muratorio S (2013) Seed dispersal kernels estimated from genotypes of established seedlings: does density-dependent mortality matter? Meth Ecol Evol 4(11):1059–1069CrossRefGoogle Scholar Kot M, Lewis M, van den Driessche P (1996) Dispersal data and the spread of invading organisms. Ecology 77:2027–2042CrossRefGoogle Scholar Marin J, Robert CP (2007) Bayesian Core. Springer, New York, NYzbMATHGoogle Scholar McRae BH (2006) Isolation by resistance. Evolution 60(8):1551–1561CrossRefGoogle Scholar Nachman AI (1996) Global uniqueness for a two-dimensional inverse boundary value problem. 
Algebraic graph calculus $\newcommand{\1}{\mathbf{1}}$ $\newcommand{\R}{\mathbf{R}}$ We describe a graph-theoretic analogue of vector calculus. The linear operators of vector calculus (gradient, divergence, laplacian) correspond to the matrices naturally associated to graphs (incidence matrix, adjacency matrix). This analogy is useful for formalizing the discretization of some problems in image and surface processing that are often defined in a continuous setting. 1. Reminder of vector calculus Vector calculus deals with functions and vector fields defined in $\R^3$. 1.1. Functions and vector fields A function (or scalar field) is a map $u:\R^3\to\R$. A vector field is a map $\mathbf{v}:\R^3\to\R^3$. Vector fields are written in bold. Let us fix some typical names for the coordinates. The coordinates of a point in $\R^3$ are written as $(x,y,z)$. If $\mathbf{v}$ is a vector field, then $\mathbf{v}=(a,b,c)$ where $a$, $b$ and $c$ are three scalar fields called the components of $\mathbf{v}$. We denote the partial derivatives of a function using subindices, for example $a_y:=\frac{\partial a}{\partial y}$. 1.2. Differential operators The gradient of a function $u$ is a vector field $\nabla u$ defined by \[ \nabla u = \left( u_x\ ,\ u_y\ ,\ u_z \right) \] The divergence of a vector field $\mathbf{u}=(a,b,c)$ is a scalar field $\mathrm{div}(\mathbf{u})$ defined by \[ \mathrm{div}(\mathbf{u}) = a_x + b_y + c_z \] The curl of a vector field $\mathbf{u}=(a,b,c)$ is another vector field $\mathrm{curl}(\mathbf{u})$ defined by \[ \mathrm{curl}(\mathbf{u}) = \left( c_y - b_z\ ,\ a_z - c_x\ ,\ b_x - a_y \right) \] Finally, the laplacian of a scalar field $u$ is the scalar field $\Delta u$ defined by \[ \Delta u = u_{xx} + u_{yy} + u_{zz}. \] Notice that, except for the curl, all these operations can be defined in $\R^N$. However, the curl is specific to three dimensions. There is a similar operator in two dimensions, which we call also the curl and computes a scalar field $\mathrm{curl}(\mathbf{u})$ from a vector field $\mathbf{u}=(a,b):\R^2\to\R^2$ \[ \mathrm{curl}(\mathbf{u}) = b_x - a_y \] Notice that it is the last component of the 3D curl. The curl is also defined in dimension 7. Let $\mathbf{u}=(u^1,\ldots,u^7)$ be a vector field in $\R^7$, then \[ \def\curlco#1#2#3#4#5#6{ {u^{#1}}_{#2}-{u^{#2}}_{#1}+ {u^{#3}}_{#4}-{u^{#4}}_{#3}+ {u^{#5}}_{#6}-{u^{#6}}_{#5} } \mathrm{curl}(\mathbf{u}) = \left( \begin{matrix} \curlco{2}{4}{3}{7}{5}{6} \\ \curlco{3}{5}{4}{1}{6}{7} \\ \curlco{4}{6}{5}{2}{7}{1} \\ \curlco{5}{7}{6}{3}{1}{2} \\ \curlco{6}{1}{7}{4}{2}{3} \\ \curlco{7}{2}{1}{5}{3}{4} \\ \curlco{1}{3}{2}{6}{4}{5} \\ \end{matrix} \right) \] where a sub-index $i$ denotes a partial derivative in the $i$-th dimension of $\R^7$. And analogously we can define the 6-dimensional curl by taking the last component (resulting in a scalar field). 1.3. Differential identities and properties The most important identity is $\Delta u = \mathrm{div}(\mathrm{grad}(u))$, that can be used also as the definition of $\Delta$. Other identities involving the curl are $\mathrm{curl}(\nabla u)=0$ and $\mathrm{div}(\mathrm{curl}(\mathbf{u}))=0$. The functions $u$ such that $\nabla u=0$ on $\R^3$ are the constants. The vector fields $\mathbf{v}$ such that $\mathrm{curl}(\mathbf{v})=0$ are called conservative, irrotational or integrable. They are of the form $\mathbf{v}=\nabla u$ for some function $u$ called the potential of $\mathbf{v}$. 
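These identities are easy to confirm symbolically. The snippet below is an addition to the text (it is not part of the original note); it uses Python's sympy to check that $\mathrm{curl}(\nabla u)=0$ and $\mathrm{div}(\mathrm{curl}(\mathbf{u}))=0$ for arbitrary smooth fields in three dimensions.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
u = sp.Function('u')(x, y, z)                       # arbitrary scalar field
a, b, c = (sp.Function(s)(x, y, z) for s in 'abc')  # arbitrary vector field (a, b, c)

def grad(f):
    return [sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)]

def div(v):
    return sp.diff(v[0], x) + sp.diff(v[1], y) + sp.diff(v[2], z)

def curl(v):
    return [sp.diff(v[2], y) - sp.diff(v[1], z),
            sp.diff(v[0], z) - sp.diff(v[2], x),
            sp.diff(v[1], x) - sp.diff(v[0], y)]

print([sp.simplify(t) for t in curl(grad(u))])  # [0, 0, 0]
print(sp.simplify(div(curl([a, b, c]))))        # 0
```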
The vector fields $\mathbf{v}$ such that $\mathrm{div}(\mathbf{v})=0$ are called divergence-free, volume-preserving, solenoidal or incompressible. They are of the form $\mathbf{v}=\mathrm{curl}(\mathbf{u})$ for some vector field $\mathbf{u}$ called the vector potential of $\mathbf{v}$. The scalar fields $u$ such that $\Delta u=0$ are called harmonic functions. The following identities are immediate applications of the product rule for derivatives: \[ \nabla(fg) = f\nabla g + g\nabla f \] \[ \mathrm{div}(f\mathbf{g}) = f\mathrm{div}(\mathbf{g}) + \mathbf{g}\cdot\nabla f \] 1.4. Integral calculus The divergence theorem: \[ \int_\Omega \mathrm{div}(\mathbf{g}) = \int_{\partial\Omega}\mathbf{g}\cdot\mathbf{ds} \] Combining the divergence theorem with the product rule we obtain the integration by parts formula. \[ \int_{\partial\Omega} f\mathbf{g}\cdot\mathbf{ds} = \int_\Omega f\mathrm{div}(\mathbf{g}) + \int_\Omega \mathbf{g}\cdot\nabla f \] Thus, if at least one of the two functions vanishes on the boundary of $\Omega$ \[ 0= \int_\Omega f\mathrm{div}(\mathbf{g}) + \int_\Omega \mathbf{g}\cdot\nabla f \] or, in another notation \[ \left\langle f, \mathrm{div}(\mathbf{g}) \right\rangle = \left\langle -\nabla f, \mathbf{g} \right\rangle \] thus that the operators $\mathrm{div}$ and $-\nabla$ are adjoint to each other. Integrating by parts twice we obtain that the operator $\Delta$ is self-adjoint. 2. Graphs and their matrices A graph is $G=(V,E)$ where $V$ is a set called the vertices of $G$, and $E$ is a subset of $V\times V$ called the edges of $G$. We assume always that the set $V$ is finite, and its elements are numbered from $1$ to $n$. Thus, the set $E$ is also finite (the cardinal is at most $n^2$) and we assume that the elements of $E$ are numbered from $1$ to $m$. $\displaystyle\begin{matrix} V = \{1,2,3,4,5,6\} \\ E = \{ \{1,2\},\{1,3\},\{2,4\},\{3,4\},\{4,5\},\{5,6\},\{4,6\} \} \end{matrix}$ 2.1. The adjacency list Given a graph of $n$ vertices and $m$ edges, the adjacency list is a matrix of $m$ rows and $2$ columns that contains the pairs of vertices connected by each edge. The entries of this matrix are integers on the set $\{1,\ldots,n\}$. Thus, if the $k$-th row is $(i,j)$, this means that edge $k$ connects vertices $i$ to $j$. $ \textrm{adjacency list} = \begin{pmatrix} 1 & 2 \\ 1 & 3 \\ 2 & 4 \\ 3 & 4 \\ 4 & 5 \\ 5 & 6 \\ 4 & 6 \\ \end{pmatrix} $ The adjacency list is a very efficient representation for sparse graphs (where the number of edges is proportional to the number of vertices). However, it is not very interesting from the algebraic point of view. We will see in the following three other matrices that have a very rich algebraic interpretation. 2.2. The adjacency matrix $A$ Given a graph of $n$ vertices and $m$ edges, the adjacency matrix is a square matrix $A=a_{ij}$ of size $n\times n$. The entries of $A$ are zeros and ones, with $a_{ij}=1$ if there is an edge from $i$ to $j$ and $a_{ij}=0$ otherwise. $ A = \begin{array}{l|lllllll} V\backslash V & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & 0 & 1 & 1 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 & 1 & 0 & 0 \\ 3 & 1 & 0 & 0 & 1 & 0 & 0 \\ 4 & 0 & 1 & 1 & 0 & 1 & 1 \\ 5 & 0 & 0 & 0 & 1 & 0 & 1 \\ 6 & 0 & 0 & 0 & 1 & 1 & 0 \\ \end{array} $ Notice that this matrix has somewhat less information than the adjacency list, because the ordering of the edges is lost. Thus, there is a unique way to compute the adjacency matrix from the list, but many $m!$ different ways to get the list from the matrix. 
We can chose an arbitrary canonical ordering of the edges (for example, in lexicographic order). 2.3. The Laplacian matrix $L$ Let $A$ be the adjacency matrix of a graph $G$. If we sum the values of all the elements of the $i$-th row, we obtain the number of edges going out of vertex $i$ (called the degree of the edge). Let us put the vector with all the degrees in the diagonal of a matrix $D$; in octave/matlab notation $\mathtt{D=diag(sum(A))}$. The Laplacian matrix of $G$ is defined as \[ L = A - \mathtt{diag}(\mathtt{sum}(A)) \] In the typical case where $A$ is symmetric with 0 on the diagonal, the matrix L is the same as A with minus the degree of each vertex on the diagonal entries. $ L = \begin{array}{l|lllllll} V\backslash V & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 &-2 & 1 & 1 & 0 & 0 & 0 \\ 2 & 1 &-2 & 0 & 1 & 0 & 0 \\ 3 & 1 & 0 &-2 & 1 & 0 & 0 \\ 4 & 0 & 1 & 1 &-4 & 1 & 1 \\ 5 & 0 & 0 & 0 & 1 &-2 & 1 \\ 6 & 0 & 0 & 0 & 1 & 1 &-2 \\ \end{array} $ 2.4. The incidence matrix $B$ Given a graph of $n$ vertices and $m$ edges, the incidence matrix is a rectangular matrix $B=b_{ij}$ of $m$ rows and $n$ columns. The entries of $B$ are zeros, ones and minus ones given by the edges of the graph: if the $k$-th edge goes from $i$ to $j$, then, on the $k$th row there are values $-1$ and $1$ on positions $i$ and $j$ respectively; there are zeros everywhere else. $ B = \begin{array}{l|lllllll} E\backslash V & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 &-1 & 1 & 0 & 0 & 0 & 0 \\ 2 &-1 & 0 & 1 & 0 & 0 & 0 \\ 3 & 0 &-1 & 0 & 1 & 0 & 0 \\ 4 & 0 & 0 &-1 & 1 & 0 & 0 \\ 5 & 0 & 0 & 0 &-1 & 1 & 0 \\ 6 & 0 & 0 & 0 & 0 &-1 & 1 \\ 7 & 0 & 0 & 0 &-1 & 0 & 1 \\ \end{array} $ Notice that the incidence matrix contains the same information as the adjacency list (including the order of the edges). There is an interesting relationship between the incidence matrix and the Laplacian matrix, that can be checked algebraically: \[ L = -B^TB \] This identity is the discrete analogue of $\Delta=\mathrm{div\ grad}$, as we will explain below. 2.5. The unsigned incidence matrix $C$ The incidence matrix $B$ defined above is signed, on each row there are two non-zero entries whose values are $-1$ and $1$. Thus the sum of any row is zero. We can write the matrix $B$ as $B=B_1-B_0$, where the matrices $B_0$ and $B_1$ have only zeros and ones, with a single non-zero entry per row. It will be useful later to consider the unsigned incidence matrix $C$, defined as $C=\frac{1}{2}(B_0 + B_1)$, or equivalently $C=\frac{1}{2}|B|$. The rows of the matrix $C$ sum to one. The following relations are immediate to verify \[ A = 2C^TC-B^TB/2 \] \[ \mathrm{deg} = 2C^TC+B^TB/2 \] where $\mathrm{deg}$ is an $n\times n$ diagonal matrix, whose values are the degrees of each vertex. 3. Vector calculus on graphs Most of the constructions that we have described on the vector calculus reminder above have a direct correspondence in the case of graphs. 3.1. Analogies The correspondence between vector calculus and graph theory is laid out in the following table. The main idea is that scalar fields correspond to functions defined on vertices, and vector fields correspond to functions defined on edges. 
Vector calculus | Graph theory
Base space | Graph vertices $V$
Tangent space | Graph edges $E$
$u:\Omega\to\R$ | $u:V\to\R$
$\mathbf{v}:\Omega\to\R^3$ | $\mathbf{v}:E\to\R$
Laplacian operator $\Delta$ | Laplacian matrix $L\in\mathcal{M}_{n,n}(\R)$
gradient operator $\nabla$ | incidence matrix $B\in\mathcal{M}_{m,n}(\R)$
divergence operator $\mathrm{div}$ | matrix $-B^T\in\mathcal{M}_{n,m}(\R)$
$\Delta=\mathrm{div\ grad}$ | $L=-B^T B$
scalar field $u$ | $u\in\R^n$
vector field $\mathbf{v}$ | $\mathbf{v}\in\R^m$
vector field $\nabla u$ | $Bu\in\R^m$
scalar field $\Delta u$ | $Lu\in\R^n$
scalar field $\mathrm{div}(\mathbf{v})$ | $-B^T\mathbf{v}\in\R^n$
directional derivative $\nabla u(\mathbf{a})\cdot(\mathbf{b}-\mathbf{a})$ | $\nabla u (a,b)$
$\Omega\subseteq\R^3$ | $\Omega\subseteq V$
$\partial\Omega\subseteq\R^3$ | $\partial\Omega\subseteq E$, defined as $\partial\Omega=E\cap(\Omega\times\Omega^c)$
$\displaystyle\int_\Omega\mathrm{div}(\mathbf{v}) = \int_{\partial\Omega}\mathbf{v\cdot ds}$ | $\displaystyle\sum_{a\in\Omega}\mathrm{div}(\mathbf{v})(a) = \sum_{e\in\partial\Omega}\mathbf{v}(e)$
Elliptic PDE $\Delta u = f$ | Linear system $Lu=f$
Parabolic PDE $u_t = \Delta u$ | First-order Linear ODE System $u_t=Lu$
$\textrm{div}(D\nabla u),\qquad D:\Omega\to\mathcal{M}_{3,3}(\R)$ | $-B^TDBu,\qquad D\in\mathcal{M}_{m,m}$
$g\Delta u,\qquad g:\Omega\to\R$ | $GLu,\qquad G\in\mathcal{M}_{n,n}$
pointwise product $u v$ | Hadamard product $f\odot g$
pointwise product $u\mathbf{v}$ | Hadamard product $Cf\odot g$
$\nabla fg=f\nabla g + g\nabla f$ | $B(f\circ g)=Cf\odot Bg + Cg\odot Bf$
(nothing) | unsigned incidence matrix $C\in\mathcal{M}_{m,n}(\R)$
The $\mathrm{curl}$ operator cannot be defined on general graphs, but it can be defined on planar graphs, and it satisfies similar identities. 3.2. The graph Laplacian The simplest operator of vector calculus is the Laplacian, transforming scalar fields into scalar fields. It is the simplest because no vector fields are involved, only scalar fields. Correspondingly, the simplest operator for graphs is also the Laplacian, transforming functions defined on vertices into functions defined on vertices. It is the simplest because no functions defined on edges are involved. Once we have chosen an ordering of the vertices, a scalar field is simply a vector $u\in\R^n$, and the Laplacian operator is defined by a square matrix of size $n\times n$. Let $G=(V,E)$ be a graph and $u:V\to\R$ be a scalar field. The Laplacian of $u$ is denoted by $\Delta u$ and is defined as the scalar field $\Delta u:V\to\R$ \[ \Delta u(a) := \sum_{(a,b)\in E} u(b)-u(a) \] Notice that the sum is performed for a fixed vertex $a$, and $b$ varies through all the neighbors of $a$ in the graph. (Figures: the scalar field $u$ and the scalar field $\Delta u$ on the example graph.) Just from the definition, we can deduce several properties of the laplacian:
- The sum of all the values of $\Delta u$ is always zero.
- If $u(a)$ is a local maximum, then $\Delta u(a)<0$.
- If $u(a)$ is a local minimum, then $\Delta u(a)>0$.
- If $u$ is constant, then $\Delta u$ is zero.
If we fix an ordering of the vertices, then the scalar fields $u$ and $\Delta u$ are two vectors in $\R^n$, and the linear operator $u\mapsto\Delta u$ is given by the matrix $L=A-\mathtt{diag}(\mathtt{sum}(A))$. This follows directly by decomposing the definition of $\Delta$ into two sums: \[ \Delta u(a) = \sum_{(a,b)\in E} u(b) - \sum_{(a,b)\in E} u(a) = - u(a)\mathrm{degree}(a) +\sum_{(a,b)\in E} u(b) \] Notice that the Laplacian has a nice interpretation.
If we regard the values of $u$ as a quantity distributed on the vertices of the graph, the condition $\Delta u = 0$ says that the quantity is distributed evenly, or in equilibrium: the amount of quantity at each vertex equals the average amount over its neighbours. In particular, if $u$ is constant then $\Delta u = 0$. Notice that the matrix $L$ is always singular: a constant vector is an eigenvector of eigenvalue 0. If the graph has $k$ connected components, then we have null vectors that are constant on each connected component, thus the matrix $L$ has rank $n-k$. 3.3. Graph gradient and graph divergence Recall that scalar fields are functions defined on vertices and vector fields are functions defined on edges. Thus, the gradient transforms a function defined on vertices into a function defined on edges. There is a very natural way of doing that: the value at each edge is obtained as the difference between the values at each side of the edge. (Figures: the scalar field $u$ and the vector field $\nabla u$ on the example graph.) More formally, let $G=(V,E)$ be a graph and $u:V\to\R$ be a scalar field. The gradient of $u$ is the vector field $\nabla u:E\to\R$ defined by \[ \nabla u(a,b) := u(b) - u(a) \qquad \mathrm{for}\ (a,b)\in E \] The matrix of this linear map is the incidence matrix $B$ of the graph. Think of the gradient $\nabla u(a,b)$ as the directional derivative of $u$ at point $a$ in the direction of the vector from $a$ to $b$. Now let $\mathbf{v}:E\to\R$ be a vector field. The divergence of $\mathbf{v}$ is the scalar field $\mathrm{div}(\mathbf{v}):V\to\R$ defined by: \[ \mathrm{div}(\mathbf{v})(a) := \sum_{(a,b)\in E}\mathbf{v}(a,b) -\sum_{(b,a)\in E}\mathbf{v}(b,a) \qquad \mathrm{for}\ a\in V \] The matrix of this linear map is minus the transposed incidence matrix of the graph $-B^T$. Notice that the identity $\Delta=\mathrm{div\ grad}$ is trivial from the definitions above, since both sides are exactly $\sum_{(a,b)\in E}u(b)-u(a)$. Thus, $L=-B^TB$. 3.4. Graph curl We do not need curls for our application, but let us say some words about them. These graph-theoretic analogues are easier to understand when we use differential geometry instead of plain vector calculus. In that case, the discrete analogue of $k$-forms are functions defined over the $k$-cliques of the graph. Then the exterior derivative is readily built for all values of $k$, and it contains the gradient, curl and divergence as particular cases. The particularity of 3-dimensional manifolds comes from the fact that in that case 1-forms and 2-forms have the same dimension and can both be interpreted as ``vector fields'', thus the curl operator is defined from the exterior derivative $d:\Omega^1\to\Omega^2$. In the case of graphs, we cannot in general identify functions defined on edges to functions defined on triangles, except in one particular case: when the graph is a triangulation. In that case, there is a construction that allows one to define the curl of a vector field as a vector field, by traversing the two triangles at each side of an edge. The identity $\mathrm{curl\ grad}=0$ is then the sum of 6 values that cancel pairwise, and so on. See the beautiful papers of Oliver Knill for a comprehensive coverage of this. 3.5. Graph subsets and their boundaries It is often necessary to deal with subsets of graphs (for example, when we want to interpolate a function which is known only over some vertices).
In order to do algebra with them, we model subsets as diagonal operators that contain the indicator function of the subset as the diagonal entries. This model is used for subsets of vertices and subsets of edges. Notations: Let $X=\{1,\ldots,n\}$ (or any finite ordered set) and $Y\subseteq X$. Let $a$ be a vector of length $n$ and $A$ a matrix of size $n\times n$. We use the following, somewhat ambiguous, abuses of notation:
- $\mathrm{diag}(A)\in\R^n$: the vector with the elements on the diagonal of $A$
- $\mathrm{diag}(a)\in\R^{n\times n}$: the diagonal matrix whose diagonal is $a$
- $\1_Y\in\R^{n}$: the indicator vector of the subset $Y$
- $Y=\mathrm{diag}(\1_Y)\in\R^{n\times n}$: the diagonal operator of $Y$
This last notation is very convenient in image processing, because it represents point-wise multiplication by a binary image as a linear operator (with the same name as the binary image). The $\mathrm{diag}$ operator has the same semantics as that of octave/matlab. Let $G=(V,E)$ be a graph with $n$ vertices and $m$ edges, and let $\Omega\subseteq V$. To avoid introducing new letters, we denote also by $\Omega=\omega_{ij}$ the $n\times n$ matrix that contains the indicator function of this set in its diagonal: $\omega_{ii}=1$ if $i\in \Omega$ and $\omega_{ii}=0$ otherwise. Notice that the matrix $I-\Omega$ corresponds to the complementary set $\Omega^c$. We define the boundary of a subset of vertices $\Omega\subseteq V$ as the subset of edges $\partial\Omega\subseteq E$ that go from $\Omega$ to $\Omega^c$. Notice that $\partial\Omega=E\cap(\Omega\times\Omega^c)$ in set notation. Since $\partial\Omega$ is a subset of edges, it corresponds to a diagonal matrix, also named $\partial\Omega$, of size $m\times m$. In matrix notation we have \[\partial\Omega=\mathrm{diag}(B\mathrm{diag}(\Omega))\] where $B$ is the incidence matrix of the graph. We can also write $\displaystyle\1_{\partial\Omega}=B\1_\Omega$. 3.6. Equations on graphs Now that we have described the differential and boundary operators in matrix form, it is immediate to write the discrete analogues of several linear PDEs. This is very beautiful because the analytic properties of the corresponding PDE are recovered by elementary linear algebra. 3.6.1. Laplace equation on the whole graph: \[ Lu=0 \] If the graph is connected, the matrix $L$ has rank $n-1$, thus its kernel is one-dimensional, corresponding to the constant solutions $u=c$. 3.6.2. Poisson equation on the whole graph, with data $f:V\to\R$: \[ Lu=f \] has a unique solution unless $f$ is constant. 3.6.3. Laplace equation on a subset $\Omega\subseteq V$, with Dirichlet boundary conditions $f:\Omega^c\to\R$: \[ \Omega Lu + (I-\Omega)(u-f)=0 \] Notice that this is written as an $n\times n$ linear system, but it has a diagonal part corresponding to the values of $u$ outside of $\Omega$. Notice also that the values of $f$ at the vertices that have no neighbors in $\Omega$ only appear in the diagonal part. The values of $f$ inside $\Omega$ do not appear at all (are cancelled out). 3.6.4. Laplace equation on a subset $\Omega\subseteq V$, with Neumann boundary conditions $g:\partial\Omega\to\R$: \[ \Omega Lu + (\partial\Omega)(\nabla u - g)=0 \] Or equivalently, by developing the boundary and gradient operators, \[ \left[\Omega L + \mathrm{diag}(B\mathrm{diag}(\Omega))B\right]u =\mathrm{diag}(B\mathrm{diag}(\Omega)) g \] or, in an alternative notation \[ (\mathrm{diag}(\1_\Omega) L + \mathrm{diag}(B\1_\Omega)B)u =\mathrm{diag}(B\1_\Omega) g \]
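As an illustration (not part of the original text), the Dirichlet problem 3.6.3 can be solved numerically in a few lines of Python on the example graph of Section 2; the subset $\Omega$ and the boundary data chosen below are arbitrary.

```python
import numpy as np

# Example graph from Section 2 (6 vertices, 7 edges), given by its adjacency list.
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 6), (4, 6)]
n, m = 6, len(edges)

# Signed incidence matrix B (m x n) and Laplacian L = -B^T B.
B = np.zeros((m, n))
for k, (i, j) in enumerate(edges):
    B[k, i - 1] = -1.0
    B[k, j - 1] = 1.0
L = -B.T @ B

# Subset Omega = {1, 2, 3} as a diagonal operator, Dirichlet data f on its complement.
ind = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
Omega = np.diag(ind)
I = np.eye(n)
f = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 3.0])   # only the values outside Omega matter

# Omega L u + (I - Omega)(u - f) = 0  <=>  (Omega L + (I - Omega)) u = (I - Omega) f
u = np.linalg.solve(Omega @ L + (I - Omega), (I - Omega) @ f)
print(u)              # u equals f outside Omega and is graph-harmonic inside: [1, 1, 1, 1, 2, 3]
print(Omega @ L @ u)  # approximately zero on Omega
```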
3.6.5. Heat equation on the whole graph with initial condition $u_0:V\to\R$: \[ \begin{cases} u_t & =Lu \\ u(0) & = u_0 \end{cases} \] This is a system of $n$ first-order linear differential equations with constant coefficients. It has a closed-form solution using the matrix exponential $u=e^{tL}u_0$. 3.6.6. Heat equation with source term $f:V\to\R$ and initial condition $u_0:V\to\R$: \[ \begin{cases} u_t & =Lu+f \\ u(0) & = u_0 \end{cases} \] It has likewise a closed-form solution $u=e^{tL}(u_0+L^{-1}f)-L^{-1}f$. Notice that $L^{-1}f$ only makes sense when $f$ is not a constant vector. 3.6.7. Other combinations are possible, and easy to deduce from the simpler cases: Poisson and Heat equation on subsets with various boundary conditions, etc. 3.7. Riemannian graph geometry The isotropic case of ``anisotropic'' diffusion in image processing is modelled by terms of the form $g\Delta u$, where $g$ is a positive-real valued function on $\Omega$. In the case of graphs, the function $g$ corresponds to a scalar field $g:V\to\R$, which we associate to a diagonal $n\times n$ matrix $\tilde g$ with the values of $g$. Then these terms become $\tilde gL u$ in the corresponding discrete model. Truly anisotropic diffusion comes from terms of the form $\mathrm{div}(D\nabla u)$, where the diffusivity $D$ is a field of positive-definite symmetric matrices defined over $\Omega$. In the case of graphs, we have a matrix $\tilde D$, which is also diagonal, but now of size $m\times m$. Then these terms become $\mathrm{div}(\tilde D\nabla u)$ in the discrete model, or, in matrix form, $-B^T\tilde DBu$. 3.8. Algebraic graph integral calculus Integral calculus can be generalized readily to graphs. Integrals are replaced by sums over a finite domain, and the various identities of integral calculus (e.g., the divergence theorem) become immediate matrix identities. Let $G=(V,E)$ be a graph with $V=\{1,\ldots,n\}$ and $E=\{1,\ldots,m\}$. Let $\Omega\subseteq V$ and let $f:V\to\R$ be a scalar field. The integral of $f$ over $\Omega$ is defined as \[ \int_\Omega f=\sum_{p\in \Omega}f(p) \] in matrix notation we have \( \int_\Omega f := \mathrm{sum}(\Omega f). \) Notice that here $f$ is a vector of length $n$, $\Omega$ is an $n\times n$ matrix, and we are computing the sum of all the components of the vector $\Omega f$ to obtain a single number. Notice that $f$ must be defined everywhere, but only the values inside $\Omega$ are used; thus, we could have defined $f$ only inside $\Omega$. An interface inside a graph is defined as a set of edges $S\subseteq E$. Given a vector field $\mathbf{v}:E\to\R$ we define the flow of $\mathbf{v}$ through $S$ \[ \int_S \mathbf{v\cdot ds} := \sum_{e\in S}\mathbf{v}(e) \] or, in matrix notation, $\int_S \mathbf{v\cdot ds}=\mathrm{sum}(\tilde S \mathbf{v})$ where $\tilde S$ is the diagonal matrix containing the indicator function of $S$. An interesting particular case happens when $S$ is the boundary of some region $\Omega$. We have seen above that the matrix $\tilde S$ is then equal to $\mathrm{diag}(B\mathrm{diag}(\Omega))$. This observation leads to the graph divergence theorem that says that \[ \int_{\partial\Omega} \mathbf{v\cdot ds} =\int_\Omega\mathrm{div}(\mathbf{v}) \] or, in matrix notation, \[ \1_\Omega\cdot(B^T\mathbf{v}) = (B\1_\Omega)\cdot\mathbf{v} \] which is exactly the same thing, written differently.
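A small numerical check of these identities (again an addition to the text, using the sign convention $\mathrm{div}=-B^T$ adopted above and the example graph of Section 2):

```python
import numpy as np

# Build A, B, C, L for the example graph and verify the matrix identities above.
edges = [(1, 2), (1, 3), (2, 4), (3, 4), (4, 5), (5, 6), (4, 6)]
n, m = 6, len(edges)
A = np.zeros((n, n))
B = np.zeros((m, n))
for k, (i, j) in enumerate(edges):
    A[i - 1, j - 1] = A[j - 1, i - 1] = 1.0
    B[k, i - 1], B[k, j - 1] = -1.0, 1.0
C = 0.5 * np.abs(B)
deg = np.diag(A.sum(axis=1))
L = A - deg

assert np.allclose(L, -B.T @ B)                       # L = -B^T B
assert np.allclose(A, 2 * C.T @ C - B.T @ B / 2)      # A = 2 C^T C - B^T B / 2
assert np.allclose(deg, 2 * C.T @ C + B.T @ B / 2)    # deg = 2 C^T C + B^T B / 2

# Graph divergence theorem for an arbitrary vector field v and Omega = {1, 2, 3}.
rng = np.random.default_rng(0)
v = rng.normal(size=m)
one_Omega = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])
sum_div_over_Omega = one_Omega @ (-(B.T @ v))   # sum of div(v) over Omega, div = -B^T
net_outflow = -(B @ one_Omega) @ v              # +v on edges leaving Omega, -v on edges entering
assert np.isclose(sum_div_over_Omega, net_outflow)
```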
Questions on Factorials for CAT

Find the highest power of 30 in 50!
Answer & Solution
Correct Answer: 12
30 = 2 × 3 × 5. Now 5 is the largest prime factor of 30, therefore, the powers of 5 in 50! will be less than those of 2 and 3. Therefore, there cannot be more 30s than there are 5s in 50! So we find the highest power of 5 in 50! The highest power of 5 in 50! = $\left[ \frac{50}{5} \right]+\left[ \frac{50}{25} \right]$ = 10 + 2 = 12. Hence the highest power of 30 in 50! = 12.

Find the number of zeroes present at the end of 100!
We get a zero at the end of a number when we multiply that number by 10. So, to calculate the number of zeroes at the end of 100!, we have to find the highest power of 10 present in the number. Since 10 = 2 × 5, we have to find the highest power of 5 in 100! The highest power of 5 in 100! = $\left[ \frac{100}{5} \right]+\left[ \frac{100}{25} \right]$ = 20 + 4 = 24. Therefore, the number of zeroes at the end of 100! = 24.

What is the rightmost non-zero digit in 15!?
Correct Answer: 8
We saw that 15! = $2^{11}\times 3^{6}\times 5^{3}\times 7^{2}\times 11 \times 13$. Now $2^{3}\times 5^{3}$ will give $10^{3}$ or 3 zeroes at the end. Removing $2^{3}\times 5^{3}$, we will be left with $2^{8}\times 3^{6}\times 7^{2}\times 11 \times 13$. Calculating the units digit of each prime factor separately, the units digit of the product $2^{8}\times 3^{6}\times 7^{2}\times 11 \times 13$ = units digit of 6 × 9 × 9 × 1 × 3 = 8. Therefore, the rightmost non-zero digit = 8.

Find the highest power of 72 in 100!
72 = 8 × 9. Therefore, we need to find the highest powers of 8 and 9 in 100!
8 = $2^{3}$ $\Rightarrow$ highest power of 8 in 100! = $\left[ \frac{\left[ \frac{100}{2} \right]+\left[ \frac{100}{4} \right]+\left[ \frac{100}{8} \right]+\left[ \frac{100}{16} \right]+\left[ \frac{100}{32} \right]+\left[ \frac{100}{64} \right]}{3} \right]=32$
9 = $3^{2}$ $\Rightarrow$ highest power of 9 in 100! = $\left[ \frac{\left[ \frac{100}{3} \right]+\left[ \frac{100}{9} \right]+\left[ \frac{100}{27} \right]+\left[ \frac{100}{81} \right]}{2} \right]=24$
As powers of 9 are less, therefore, the highest power of 72 in 100! = 24.

Find the highest power of 24 in 150!
24 = 8 × 3. Therefore, we need to find the highest powers of 8 and 3 in 150!
8 = $2^{3}$ $\Rightarrow$ highest power of 8 in 150! = $\left[ \frac{\left[ \frac{150}{2} \right]+\left[ \frac{150}{4} \right]+\left[ \frac{150}{8} \right]+\left[ \frac{150}{16} \right]+\left[ \frac{150}{32} \right]+\left[ \frac{150}{64} \right]+\left[ \frac{150}{128} \right]}{3} \right]=48$
Highest power of 3 in 150! = $\left[ \frac{150}{3} \right]+\left[ \frac{150}{9} \right]+\left[ \frac{150}{27} \right]+\left[ \frac{150}{81} \right]=72$
As the powers of 8 are less, the highest power of 24 in 150! = 48.

What is the largest power of 2 that can divide 269!?
Correct Answer: 265
$\left[ \frac { 269 } { 2 } \right] + \left[ \frac { 269 } { 2 ^ { 2 } } \right] + \left[ \frac { 269 } { 2 ^ { 3 } } \right] + \left[ \frac { 269 } { 2 ^ { 4 } } \right] + \left[ \frac { 269 } { 2 ^ { 5 } } \right] + \left[ \frac { 269 } { 2 ^ { 6 } } \right] + \left[ \frac { 269 } { 2 ^ { 7 } } \right] + \left[ \frac { 269 } { 2 ^ { 8 } } \right]$ = 134 + 67 + 33 + 16 + 8 + 4 + 2 + 1 = 265. Thus the greatest power of 2 that can divide 269! exactly is 265.

How many natural numbers 'n' are there, such that 'n!' ends with exactly 30 zeroes?
According to the question, n! should have 30 zeroes at the end. If n = 100 (taken at random, keeping in view that 30 is the number of zeroes), 100! has $\left\{ \frac { 100 } { 5 } + \frac { 100 } { 5 ^ { 2 } } \right\} = 24$ zeroes, so n should be greater than 100. The next multiple of 5 is 105. But 105 = 5 × 21 has only one extra 5. The number of zeroes will increase by 1 only.
Similarly 110, 115 and 120 also have one extra 5. Number of zeroes (from 120! to 124!) = 28. Now, the next multiple of 5 is 125 and 125 contains three 5s. So, the number of zeroes will increase by 3. Number of zeroes in 125! = 28 + 3 = 31. So, there is no factorial of a number which ends with exactly 30 zeroes.

n! has x number of zeroes at the end and (n + 1)! has (x + 3) zeroes at the end. 1≤n≤1000. How many solutions are possible for 'n'?
We can see that by increasing the natural number by 1, we are gathering 3 more powers of 5. Therefore, (n + 1) is a multiple of 125 but not a multiple of 625, as that would result in 4 powers of 5. Therefore, (n + 1) will be equal to all the multiples of 125 minus 625. Total number of multiples of 125 up to 1000 = 8. The required answer is (8 − 1) = 7.

The number 2006! is written in base 22. How many zeroes are there at the end?
The number of zeroes present at the end of 2006! in base 22 will be equal to the number of times 22 divides 2006! completely. Therefore, we need to find the highest power of 22 contained in 2006! 22 = 2 × 11. As 11 is the largest prime factor of 22, we will find the highest power of 11 contained in 2006! Therefore, our answer has to be $[ 2006 / 11 ] + \left[ 2006 / 11 ^ { 2 } \right] + \left[ 2006 / 11 ^ { 3 } \right] = 199$.
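All of these solutions rely on Legendre's formula for the highest power of a prime $p$ dividing $n!$. The short Python helper below is added here for reference and automates the computation:

```python
def highest_prime_power(n, p):
    """Legendre's formula: the highest power of the prime p dividing n!."""
    power, q = 0, p
    while q <= n:
        power += n // q
        q *= p
    return power

print(highest_prime_power(50, 5))     # 12  (highest power of 30 in 50!)
print(highest_prime_power(100, 5))    # 24  (trailing zeroes of 100!)
print(highest_prime_power(269, 2))    # 265
print(highest_prime_power(2006, 11))  # 199 (trailing zeroes of 2006! in base 22)
```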
Lait 80 (1) (2000) 121-127. New applications of membrane technology in the dairy industry. https://doi.org/10.1051/lait:2000113
Shear separation: a promising method for protein fractionation
Mark F. Hurwitz, John D. Brantley
Process Equipment Development Department, Pall Corporation, Cortland, New York, 13045, USA; 10 Deer Drive, Sound Beach, NY, 11789, USA
Shear Separation is a method of separating macromolecules suspended in a fluid by inducing hydrodynamic forces that depend on size. The forces are produced in the laminar sub-layer of a turbulent shear boundary layer adjacent to a porous membrane. The suspended particles are lifted away from the porous membrane, against the drag of the permeate flow, by the viscous part of the shear stress. We have found that the transmission rate of a protein through the membrane depends on the shear rate, the permeate rate, the protein size, and the protein shape, even when the pore size of the membrane is substantially larger than the largest of the proteins. As a result, this is a size and shape dependent separation of the macromolecules. The membrane functions as a surface upon which an extremely high shear laminar sub-layer at the membrane surface may be generated. The effect has been demonstrated on the bench scale with clean mixtures of cytochrome C and bovine serum albumin. The separation can be observed with membranes rated approximately molecular weight cut off (MWCO) or less. Using this technique, milk serum containing BSA and smaller proteins without fats, caseins or immunoglobulins has been produced from skimmed milk. In addition, the milk serum has been fractionated with transmission fractions of 18% for BSA, 33% for $\beta$-lactoglobulin, and 62% for $\alpha$-lactalbumin.
shear separation / fractionation / dairy / ultra-filtration / protein
Correspondence and reprints: M.F. Hurwitz. Copyright INRA, EDP Sciences
Community evolution in patent networks: technological change and network dynamics
Yuan Gao (ORCID: orcid.org/0000-0003-3505-9304), Zhen Zhu, Raja Kali & Massimo Riccaboni
Applied Network Science, volume 3, Article number: 26 (2018)
When studying patent data as a way to understand innovation and technological change, the conventional indicators might fall short, and categorizing technologies based on the existing classification systems used by patent authorities could cause inaccuracy and misclassification, as shown in the literature. Gao et al. (International Workshop on Complex Networks and their Applications, 2017) have established a method to analyze patent classes of similar technologies as network communities. In this paper, we adopt the stabilized Louvain method for network community detection to improve consistency and stability. Incorporating the overlapping community mapping algorithm, we also develop a new method to identify the central nodes based on the temporal evolution of the network structure and track the changes of communities over time. A case study of Germany's patent data is used to demonstrate and verify the application of the method and the results. Compared to the non-network metrics and conventional network measures, we offer a heuristic approach with a dynamic view and more stable results.
Patent data has attracted the interest of researchers as a way to measure and understand innovation and technological change, especially with the increased availability of online electronic databases and the efforts made by worldwide patent authorities to consolidate and harmonize patent data at the international level (Maraut et al. 2008; OECD 2009). Gao et al. (2017) have introduced an approach to construct networks based on the OECD Triadic Patent Family database (Dernis and Khan 2004), to identify communities and the community cores. The comparison against the International Patent Classification (IPC) system (WIPO 2017a; 2017b) shows that the endogenous communities can provide a more accurate and complete list of potentially associated IPC classes for any given patent class. This association is indicated by being among the most consistent nodes in the community containing the given node, as measured by an indicator named coreness. However, that approach was unable to effectively capture the temporal evolution of a community over time due to the difficulty in community tracking. This paper continues to address this unsolved problem. For community identification, we use an improved Louvain modularity optimization algorithm. To define community cores, we have developed a heuristic approach to detect the central groups of nodes based on the intrinsic characteristics of the temporal networks. As for community tracking, we use a method to find the "best match" based on majority nodes mapping to the reference community. Verification and robustness checks show that our findings are sound and reliable. We also present a case study to demonstrate the real-world implications of our results. Since the sixties, patent data has been used by many researchers to measure patent quality, economic value and possible impact on technological developments and the economy (Griliches and Schmookler 1963; Comanor and Scherer 1969; Griliches 1998; Squicciarini et al. 2013; Hausman and Johnston 2014).
Most of the well-recognized conventional indicators are straightforward measures, such as the number of patent applications and publications, time needed from filing to grant (grant lag), number of different technology classification codes involved (patent scope), forward and backward citation counts, etc. Such indicators may be used to track technological changes and innovation, but when considered alone, will fall short due to their simplicity and lack of context, resulting in bias and sometimes contradicting conclusions (Benner and Waldfogel 2008; Dang and Motohashi 2015; Hall et al. 2001; Hall et al. 2005; Harhoff et al. 2003). In the light of this, we carried out the previous research (Gao et al. 2017) to study patent data from a network perspective (Acemoglu et al. 2016), which lays the foundation for the motivation of this paper. More specifically, two types of networks are constructed based on how individual patents are grouped into the same family, and how patents in different families cite each other. In both networks, the nodes are the 4-digit subclass level IPC codes following WIPO's IPC scheme of 2016 (WIPO 2016). This paper focuses on the former type, the family cohort network, in which any two of the total of 639 nodes are connected when they are both found in patents of the same patent family. The more times two subclass nodes are found to share the same family, the more intensely they are linked in the network. Based on this construction mechanism, a community of closely connected nodes indicates that the represented technological fields are more likely to be found in the same inventions. For example, pharmaceutical products in IPC class A61 and enzymology or microbiology in class C12 frequently co-occur in patent families and they are found to be in the same network community. Application inventions usually involve more than one technology field. A car, for example, consists of many parts serving different functions. Innovations in molecular material science could stimulate the birth of a new type of tire, or a more efficient type of fuel, which then brings a new design of engines involving mechanical and electronic innovations. Along the technological trajectories there are many cases like this. Finding out how an established community of technologies changes over time, splitting up and merging with other technologies, is not only interesting for the retrospective observation of technological development trends, but also helps in understanding the interactions between science and technology, policy making, market drivers and other socio-economic factors. The dataset used for the analysis is retrieved from the February 2016 edition of the OECD patent database (OECD 2017). In addition, ISO country codes from the OECD REGPAT database (Maraut et al. 2008) are used to sort out patent families by country. In this paper, a patent family from a country is defined as a family containing at least one patent of which at least one applicant is from that country. The applicant's country is used instead of the inventor's country because the applicants designate the owners or party in control of the invention, mostly firms (OECD 2009).
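As a rough sketch (not the authors' code), such a family cohort network can be assembled with NetworkX once each patent family has been reduced to its set of 4-digit IPC subclasses; the input format assumed below is illustrative.

```python
import itertools
import networkx as nx

def family_cohort_network(families):
    """Weighted IPC co-occurrence network for one priority year.

    `families` is assumed to be an iterable of sets of 4-digit IPC subclass codes,
    one set per triadic patent family (this input format is our assumption).
    """
    G = nx.Graph()
    for subclasses in families:
        for a, b in itertools.combinations(sorted(set(subclasses)), 2):
            if G.has_edge(a, b):
                G[a][b]['weight'] += 1   # one more family sharing this pair of subclasses
            else:
                G.add_edge(a, b, weight=1)
    return G

# Example: G_1990 = family_cohort_network(families_by_year[1990])
```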
The REGPAT database is most reliable for OECD and EU countries since it is based on two sources: patent applications to the European Patent Office (EPO) and filed under the Patent Co-operation Treaty (PCT) from 1977 to 2013. We chose to focus on Germany for our case study and we use the data from year 1980–2013 for more consistent data quality. Germany has the largest number of patent applications among all the EU countries, and ranks third for patent production among the OECD countries. The analysis mainly consists of three parts, to be described in the following paragraphs: community identification, central nodes identification and community tracking over time. Community identification In our previous study (Gao et al. 2017), we used the Lumped Markov Chain method proposed by Carlo Piccardi (2011) to detect clusters in networks. This method produces satisfying results for a single static network with sufficiently strong clustering structure. However, for our purpose to analyze the temporal evolution of a network, essentially a network in multiple time slices, this method would treat each time slice as a separate network without connection to each other, which is not appropriate for the continuous technological development issue of interest. Also, the marginal results observed show that the detected community structure is very sensitive to the input network. In other words, although the network is not supposed to have dramatic change from one snapshot in time to the next, a small change could cause significant transformation in the resulting communities. To better capture the network's temporal properties and overcome the instability, we use a modification of the Louvain modularity optimization method for community detection. This modification, namely the Stabilized Louvain Method, proposed by Aynaud and Guillaume (2010), has been proved to achieve more stable results in tracing communities over time. The Louvain method finds the community structure with maximum modularity by looking for modularity gain through iterations (Blondel et al. 2008). The modification, essentially, is to change the initial partition of the network at time t to the detected partition at time t-1, thus the initial partition is constrained to take into account the communities found at the previous time steps, making it possible to identify the real trends. The algorithm implementation is based on the Python module using NetworkX for community detection (Aynaud 2009). We split up the database by the earliest priority year of patent family, and execute the algorithm for each year, using the detected network partition as the initial partition for the next year. Central nodes identification There are many different ways to define centrality within a community and/or a network, from the classic definitions by degree, betweenness, closeness, Eigenvector, PageRank, etc, to many customized concepts in empirical and theoretical researches (Freeman 1978;Wasserman and Faust 1994;Valente et al. 2008). For example, in our previous work (Gao et al. 2017) "Coreness" has been defined as a measure of weighted centrality, based on the probability to be present in the community and the intra-community centrality of each node. However, similar to community detection, centrality measures are not designed for temporal evolving networks and the adoption of one metric out of the others is usually an ad-hoc choice. 
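For reference, the classic measures mentioned above are available as one-line calls in NetworkX; the small sketch below is an illustration added here, not taken from the paper.

```python
import networkx as nx

def classic_centralities(G):
    # Conventional, static centrality measures, for comparison with the temporal approach.
    return {
        'degree': nx.degree_centrality(G),
        'betweenness': nx.betweenness_centrality(G),
        'closeness': nx.closeness_centrality(G),
        'eigenvector': nx.eigenvector_centrality(G, weight='weight', max_iter=1000),
        'pagerank': nx.pagerank(G, weight='weight'),
    }
```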
A more heuristic concept of cores, as defined by Seifi and colleagues, is certain sets of nodes that different community detection algorithms, or multiple executions of a non-deterministic algorithm, would agree on (Seifi et al. 2013). They summarized that for a static network, there are two types of algorithms to identify such sets of nodes: by adding perturbations to the network, and by changing the initial configuration. In the first type, small perturbations such as removing a fraction of links and putting them back on random pairs of nodes, are used to create slightly different networks from the original and produce different partitioning results for comparison and finding of the consensus communities. However, for a network that changes over time, such perturbations naturally exist in each time slice. In fact, they are the temporal changes to be discovered. Therefore, the latter type is more appropriate. Wang and Fleury experimented with the overlapping community technique in a series of works (Wang 2012; Wang and Fleury 2010; 2013). Our method is similar to the concept of Wang and Fleury's fuzzy detection method to identify modular overlaps, which are groups of nodes or sub-communities shared by several communities (Wang 2012), with a different implementation. We describe the overlapping community mapping algorithm and the central nodes identification methods in a 4-step procedure: I. Given the network partition P in the reference time slice t, identify a community C (C ∈ P). C is the target community of interest to be mapped to in the following time slices. P is obtained using the Stabilized Louvain Method described in the previous subsection. For a network with the total set of nodes E, $P = \{C_1, C_2, \ldots, C_k\}$, where: $$\bigcup_{i} {C_{i}} = {E}, \\ i \neq j \Rightarrow {C_{i}} \cap {C_{j}} = \emptyset $$ II. In the network with partition P' of a following time slice t', find the community with the most nodes in C, and that is the mapped community C' of C. The change of C from t to t' is considered the change between C and C'. This step can be illustrated by the pseudo-code in Algorithm 1. III. Based on the communities detected in the previous step, take any node k, find the community $C_0$ it belongs to in the initial year $T_0$ of a certain time window of n years, and use the mapping algorithm to track $C_0$ in the following years within the time window. IV. The more significant this node k is, the more likely it is to be found in the mapped communities. Each node will have a number $W_k$ ($W_k \leq n$) of how many times it is included in the mapped communities throughout the time window. The group of nodes with the largest $W_k$ will become the central sets in this time window. Steps III and IV can be illustrated by the pseudo-code in Algorithm 2. This method uses the intrinsic temporal dynamics of the network to find the central nodes. It is intuitive and heuristic, independent of arbitrary ad-hoc choices of measures. The configuration of the initial year and the length of the time window could significantly affect the results. Therefore, robustness checks using different lengths of rolling time windows are necessary to verify stability. Community tracking over time After sets of central nodes are identified, it is then possible to track the community containing them through the years. The tracking method is the same as the mapping algorithm described above.
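A compact sketch of this pipeline in Python follows; it is our illustration, not the authors' code. It assumes the python-louvain package (usually imported as community), whose best_partition function accepts an initial partition, and all helper names are ours.

```python
from collections import Counter
import community as community_louvain   # the python-louvain package

def stabilized_partitions(graphs_by_year, resolution=1.0):
    """Louvain run year by year, seeded with the previous year's partition (stabilized Louvain)."""
    partitions, previous = {}, None
    for year in sorted(graphs_by_year):
        G = graphs_by_year[year]
        if previous is None:
            seed = None
        else:
            seed, fresh = {}, max(previous.values()) + 1
            for node in G.nodes:
                if node in previous:
                    seed[node] = previous[node]
                else:                     # nodes unseen last year start in their own community
                    seed[node] = fresh
                    fresh += 1
        partitions[year] = community_louvain.best_partition(
            G, partition=seed, weight='weight', resolution=resolution)
        previous = partitions[year]
    return partitions

def map_community(target, partition):
    """Algorithm 1: the community of `partition` that shares the most nodes with `target`."""
    votes = Counter(partition[n] for n in target if n in partition)
    if not votes:
        return set()
    winner = votes.most_common(1)[0][0]
    return {n for n, c in partition.items() if c == winner}

def central_nodes(C0, partitions, years, refer_to_initial=True):
    """Steps III-IV: track C0 over the window and count occurrences W_k of each node."""
    W = Counter(C0)
    reference = set(C0)
    for year in years[1:]:
        mapped = map_community(set(C0) if refer_to_initial else reference, partitions[year])
        W.update(mapped)
        reference = mapped
    return {node for node, w in W.items() if w == len(years)}, W
```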
Visualization helps to show that the central nodes are the persistent "cores" of the community under tracking, whereas the "peripheral" nodes reflect the changes over time. Using the 3-step method described above, we perform a case study using data of patent families with Germany as the applicant's country. As the largest economy of the EU, Germany also ranks top among all the EU countries in terms of IP filings, including patent applications. Data from the World Intellectual Property Organization (WIPO statistics database 2017) show that 176,693 patents were filed with Germany's patent office in 2016 from residents and abroad, more than twice the 71,276 from France, which ranks second in the EU. WIPO's statistics also report that the top 5 fields of technology associated with patent applications are transport; electrical machinery, apparatus; mechanical elements; engines, pumps, turbines; and measurement. Analysis configuration In our method, there are several adjustable parameters: Community Detection Resolution. In the first step, the Louvain method allows for different resolution settings, an implementation of the idea raised by Lambiotte and colleagues that time plays the role of an intrinsic parameter to uncover community structures at different resolutions (Lambiotte et al. 2008). To test the influence of resolution, we run community detection using different resolutions ranging from 0.5 to 2. Overlapping Community Reference. In the second step, there are two ways to choose the reference year: for any non-initial year $T_t$ in the time window, always refer to the initial year $T_0$, or refer to the previous year $T_{t-1}$. The latter would mitigate the dependency on the initial year. We have applied both types of time referencing and compared the results. Time Window Setting. As mentioned in the previous section, the initial year's network partition is used as the reference for the following years' community mapping. The time window length is important for two reasons: first, depending on the pace of technology development and potential events driving the changes, the period of time that the initial year would remain valid as the reference varies; and second, longer time windows would require a node to be more "central" to appear at all times or most of the time, and therefore would result in smaller sets of central nodes than shorter time windows. To address these concerns, we used different rolling window settings, including 5 or 10-year time windows with the initial year rolling from year to year (for example, 1980–1989, 1981–1990, …), and 5 or 10-year time windows with the initial year rolling 5 years apart (for example, 1980–1989, 1985–1994, …). Community detection - quantities and sizes: For community detection, we apply the stabilized Louvain method on the entire time range from 1980 to 2013 because technological development is continuous through all the years. We first check the number of communities detected at different resolution levels. As each node represents a subclass in the IPC scheme, not all of them would appear in every year's patent applications. In addition, some patent families contain just a single subclass. Such cases would result in "orphan communities", communities that have only one node without connection to any other nodes. There are also some very small communities with 2 or 3 nodes.
Additional file 1: Figure S8 shows the community structure of selected years with resolution set to 1.0, including all the small communities and orphans with nodes layout using Fruchterman-Reingold force-directed algorithm (Hagberg et al. 2008;Fruchterman and Reingold 1991). Each sample year has an average of 151 orphan nodes plus 7 nodes in small communities with no more than 5 nodes. So many isolated nodes and small communities will cause too much noise in the analysis. To focus on the meaningful clusters, we have excluded all the communities with 5 nodes or less from the detected partitions. Quantities of the remaining communities are shown in Fig. 1. Number of communities at different resolutions. The x-axis indicates years from 1980 to 2013, and the y-axis indicates number of communities Contrary to the common wisdom that higher resolutions correspond to finer, and therefore more partitions, the figure shows that after excluding the very small communities, the lowest resolution 0.5 has the most communities in all the years, and resolutions 1.8 and 2.0 have the fewest. Figure 1 also shows that the community numbers generally have a decreasing trend over the years. This is due to the mechanism of the stabilized algorithm where each year's initial partition builds on the previous year. With the enhanced stability, it becomes easier to identify clusters with time. It is noteworthy that the decrease of number of non-tiny communities over time does not indicate the breakdown of weakly connected communities, but rather community merging, including the situation where a community splits into 2 or more smaller parts which merge into other large communities. Likewise, one should be aware that the disappearance of a portion of nodes in a community does not mean such nodes abruptly disconnect from the central nodes of the community. They are most likely still connected, but have become more closely connected with another set of central nodes, or are replaced by other nodes that are closer to the original central nodes. The methodology of cluster identification involves such "competition" at all times. We also check the community sizes. Figure 2 shows the average number of nodes in community for all the years at different resolutions. Overall, the community size increases with resolution, and from the earlier years to the more recent years. Average community size at different resolutions. The x-axis indicates years from 1980 to 2013, and the y-axis indicates number of nodes in the community The first-step results show that although the algorithm detects more, finer communities under higher resolutions, a lot of them are very small communities. As a result, at the higher resolutions the community size distribution tends to be more polarized, with fewer but more aggregated communities, and more tiny communities than at the lower resolutions. Central nodes - occurring rate: Similarly, different resolutions would result in different sets of central nodes. We define an indicator named "occurring rate" as the number of occurrence of each node in the mapped communities, divided by the total number of years in the time window. 
For a year-to-year rolling time window setting, the average occurring rate of all the nodes over a certain time window is calculated as $$ O_{r} = \frac{\sum_{t=1}^{34-N+1}\left(\frac{\sum_{i=1}^{m}\left(\frac{n_{{ir}}}{N}\right)}{m}\right)}{34-N+1}, r\in\{0.2, 0.5, 1.0, 1.2, 1.5, 1.8, 2.0\}, m\leq 639 $$ where m is the total number of nodes in year t after excluding those very small communities with less than 5 nodes; n ir is the occurrence of the i th node at resolution r in each time window during the community mapping process including the initial year; and N is the length of the time window. We use the configuration of 10-year windows rolling from year to year to demonstrate this result. When N is equal to 10 in Eq. 1, the calculated mean values and standard deviations of the occurring rates at various resolutions are shown in Fig. 3. Using both mapping algorithms, the lowest average occurring rates are at resolution 1.0. Occurring rate statistics at different resolutions. The mean value and standard deviation of the occurring rate over all the nodes, using the 10-year time window rolling from year to year, including 25 time windows with initial years from 1980 to 2004. The x-axis indicates various resolution values, and the y-axis is the scale of mean values. Results from two mapping methods are shown in this figure While there is no benchmark for the absolutely ground truth to determine which resolution is the "best", for our analysis purpose there are some preferred qualities: lower average occurring rates are more desirable because such community structures can better reflect the changes over time: Figs. 1 and 2 show that the higher resolutions generate fewer and larger communities, which indicates that the community sizes tend to polarize at higher resolutions, with fewer large communities and more tiny communities, or even disconnected single-node communities. At lower resolutions, the number of communities larger than five increases, which might also bring more instability (the number of distinct communities decreases from 13 to 8 at resolution 0.5). Therefore, we choose resolution 1.0 as the setting for the next step, to identify communities and track them over time. The statistical behavior shown above under different resolutions is related to the problem known as "resolution limit" (Fortunato and Barthelemy 2007), that the modularity optimization method may fail to identify communities smaller than a certain scale. Lambiotte and colleagues have also verified in their framework that partitions beyond a certain resolution limit are obtained at small time where the optimal partition is the finest (Lambiotte et al. 2008). Central nodes at resolution 1.0: Using the overlapping algorithm, at resolution 1.0, we select different time window configurations to identify the central nodes, each with the two referencing methods described above. Figure 4 shows the central nodes plotting under the 10-year time window setting, rolling from year to year. The threshold of the central nodes is set to be the length of the time windows (34 for the all-year setting and 10 for the rolling windows). That is, only the most persistent nodes with an occurring rate of one within the time window are colored in the figure. So under the all-year setting there are fewer central nodes. If a node is central using both referencing methods (colored green), it is more likely that the initial community has not gone through significant reshuffling. 
If it is central only when referring to the initial year (colored red), then in at least one of the following years in the time window, the initial community has probably experienced some changes that are not in a consistent direction. For example, when merging and then splitting, by referring to the previous year a node might be left out in the minority part of the merged community. If a node is central only when referring to the previous year, it is likely that it has gradually drifted away from the initial community through accumulated changes. Some nodes first turn red and then turn green, which means the changes have stabilized. Central nodes of the 10-year time window rolling from year to year. The x-axis indicates time windows from 1980–1989 to 2004–2013, except for the first column labeled "ALL", which is the all-year condition. The y-axis indicates the nodes, i.e. IPC subclasses, ordered by IPC index. Colored blocks indicate central nodes in a time window using at least one referencing method: green represents central nodes both methods have in common, red those central only by referring to the initial year, and yellow those central only by referring to the previous year. Figure 4 shows several noteworthy trends, highlighted as framed areas 1–4. However, at this moment it is too soon to relate these signals to real-world facts since it is not yet clear how central nodes are grouped into different communities. At this stage, the visualization provides guidance on the potential trends to take a closer look at. Overall, it also shows the most persistent central nodes, such as IPC classes C07-C08 (organic chemistry and organic macromolecular compounds) and H03-H04 (electric circuitry and electric communication technique). Community tracking: At this step, any chosen community in the initial year can be tracked to analyze its changes over time. We use two examples to illustrate our approach. Since the endogenous communities do not have meaningful names, we refer to them by one of the representative central nodes they contain in year 1980's partition: B01D, defined in IPC as "separation in physical or chemical processes"; and B60R, "vehicles, vehicle fittings, or vehicle parts" not provided for in other categories under class B60, "vehicles in general". Figures 5 and 6 show the results of tracking the two communities above, respectively. Both communities cover multiple IPC sections, as discussed by Gao et al. (2017). The two figures show that the consistently overlapping parts of the two communities are different most of the time. B01D's community mainly consists of various physical or chemical processes treating materials and tooling (classes B01-B06), artificial materials from glass to cement and ceramics (C02-C03), petrol and gas industries (C10), and metallurgy and metal surface treatment (C21-C23). Such a composition suggests the application of physical or chemical processing techniques in the inventions of certain industries. For B60R, its community covers the majority of Sections B and E and classes F01-F17, a combination of machinery, mechanical engineering, vehicles and transportation, building and construction. Relating to Fig. 4, the central nodes of these two communities contribute to a majority of the central nodes, including the framed areas 1 and 2. This is consistent with the WIPO statistics about Germany's top technology fields of patent applications (for the complete IPC definitions, please refer to WIPO's IPC Scheme (WIPO 2016)).
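The window-based mapping, occurring rate (Eq. 1) and central-node rule described above can be illustrated with a short sketch. The code below is a simplified illustration and not the authors' exact implementation; it assumes that each year's partition is available as a Python dict from IPC subclass to community label (for instance, the output of the python-louvain package cited in the references), and all helper names are ours.

# Minimal sketch (assumed data format: one dict per year mapping node -> community label).
def community_of(partition, seed):
    """Return the set of nodes sharing the seed node's community label."""
    label = partition[seed]
    return {node for node, c in partition.items() if c == label}

def map_forward(partitions, seed, min_size=6):
    """Map the seed node's community year by year, referring to the previous year's mapped set."""
    mapped = [community_of(partitions[0], seed)]
    for part in partitions[1:]:
        groups = {}
        for node, c in part.items():
            groups.setdefault(c, set()).add(node)
        # ignore the very small communities (5 nodes or fewer), as described in the text
        candidates = [g for g in groups.values() if len(g) >= min_size]
        # map to the community overlapping most with the previous year's mapped community
        mapped.append(max(candidates, key=lambda g: len(g & mapped[-1]), default=mapped[-1]))
    return mapped

def occurring_rates(mapped):
    """Occurring rate = (number of years a node appears in the mapped community) / window length."""
    window_length = len(mapped)
    counts = {}
    for yearly in mapped:
        for node in yearly:
            counts[node] = counts.get(node, 0) + 1
    return {node: n / window_length for node, n in counts.items()}

def central_nodes(mapped):
    """Central nodes: occurring rate equal to one over the whole time window."""
    return {node for node, r in occurring_rates(mapped).items() if r == 1.0}

# Hypothetical usage for a 10-year window, e.g.:
# window = [partitions_by_year[y] for y in range(1980, 1990)]
# central = central_nodes(map_forward(window, seed="B60R"))

This sketch uses only one of the two referencing methods (mapping to the previous year); mapping to the initial year would simply keep mapped[0] as the fixed reference set.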
Tracking community B01D in consecutive 5-year time windows, mapping to the previous year. The x-axis indicates years from 1980 to 2013, and the y-axis indicates the nodes, i.e. IPC subclasses, ordered by IPC index. The community mapping is based on consecutive 5-year windows. In each time window, the initial year's community containing the central node set represented by B01D is shown in blue. In the remaining 4 years, colored nodes represent the mapped communities: red indicates the node does not exist in the reference community (community of the previous year), and purple indicates the overlapping part. Tracking community B60R in consecutive 5-year time windows, mapping to the previous year. The x-axis indicates years from 1980 to 2013, and the y-axis indicates the nodes, i.e. IPC subclasses, ordered by IPC index. The community mapping is based on consecutive 5-year windows. In each time window, the initial year's community containing the central node set represented by B60R is shown in blue. In the remaining 4 years, colored nodes represent the mapped communities: red indicates the node does not exist in the reference community (community of the previous year), and purple indicates the overlapping part. Next, we focus on the major differences between the two figures. From 1990 to 1999, B21-B30 "moves" from Fig. 6 to Fig. 5. Those subclasses focus on technologies related to metal working, machine tools, and hand tools, which are likely to be applied in both communities. The temporary "move" reverses after 1999. This is an example of marginal clustering. Another similar case is the "move" of classes F22-F25 from Fig. 5 to Fig. 6 from 2000 to 2009. This part represents technologies related to combustion processes, heating and refrigeration. After robustness checks, the "moves" still exist. This indicates that, rather than being an artifact of the time window configuration, the "moving" technologies are closely connected to both communities, and the network clustering algorithm captures the changes in their relative connectivity. These two "moving" parts also provide an explanation for the framed areas 1 and 3 in Fig. 4: the temporary community switches may result in the rise and fall of a set of central nodes in the following or preceding rolling time windows. In Fig. 6, we should also notice the spread to Sections G and H starting from the 1990s. This is a consistent trend, getting stronger in the last 4 years. Compared to Fig. 5, which also covers a part of Section G, the community containing B60R incorporates more technologies in digital computers (class G06) and electric devices and power supply and distribution (classes H01 and H02). This observation is in line with WIPO's report of electrical machinery as the second top technology field of patent applications (statistics database W 2017). Technological change in Germany's automotive industry: Making sense of data analysis findings in terms of real-world technological trends is always difficult. In most empirical analyses, reliable methods and domain knowledge of the industry are both essential. In Fig. 6, the technology community containing B60R takes up more than half of Germany's patent filing activities, with the most persistent parts being IPC classes B60-B67, Section E, and F1-F16. These technologies can be considered as the mainstream of this community: vehicles and transportation, building and construction, machines and engines (for the details of these IPC schemes, please refer to Table 1).
Table 1 IPC schemes of the persistent technologies in community containing B60R Clear changing trends can also be observed. Aside from the marginal "moves" like B21-B23 as discussed above, we focus on the more consistent trends, such as the increasing involvement of Sections G and H, specifically, classes G01, G05, G06, H01 and H02, shown in the bottom framed area in Fig. 6. These are the technologies related to measuring and testing, controlling and regulating, computing, electric elements, and electric power. This trend started in 2000 and has become significantly stronger since 2010 (for the relevant IPC schemes of the classes and the subordinate central subclasses, please refer to Table 2). Table 2 IPC schemes of the central nodes in Sections G and H in community containing B60R Germany's dominant industrial sectors include automotive, machinery and equipment, electrical and electronic, and chemical engineering. These sectors not only contribute to the national GDP, but are also the focal points of innovation in this country. Among the top ten German organizations filing the most PCT patents, at least 6 have automotive as their major operation or one of their major operations, including vehicle manufacturers like Continental Automotive GMBH and Audi AG, automotive components and assembly suppliers like Robert Bosch Corporation and Schaeffler Technologies AG & Co. KG, and research institutes like the Fraunhofer Society (statistics database W 2017). Germany Trade & Invest (GTAI), the economic development agency of the Federal Republic of Germany, reported that internal combustion engine energy efficiency, alternative drive technologies (including electric, hybrid, and fuel cell cars), and adopting lightweight materials and electronics are the current major market trends (GTAI 2017). From electronic technologies and software solutions to metallurgy, chemical engineering, automation and drive technologies, innovation in the automotive industry drives and benefits from a number of other sectors. In fact, these trends in the automotive sector are not limited to Germany, but Germany's case is more noticeable and representative given its outstanding concentration of R&D, design, supply, manufacturing and assembly facilities. The automotive industry does not just source from other sectors for innovative technological support. When Enkel and Gassmann examined 25 cases of cross-industry innovation, automotive was observed as both the result and the source of the original idea (Enkel and Gassmann 2010). The interacting sectors range from those with a closer cognitive distance, like aviation and the steel industry, to more distant ones, like sports, medical care and games. These cases all occurred between 2005 and 2009, and indeed, cross-industry technological interactions have become more dynamic starting from 2000, as reflected in the shuffles observed in Figs. 4, 5 and 6 of our analysis. In 2009, Germany's Federal Ministry for Environment, Nature Conservation, Building and Nuclear Safety issued the German Federal Government's National Electromobility Development Plan (Bundesregierung 2009), which specified a series of actions to promote electromobility in Germany and defines 2009 to 2011 as market preparation, 2011 to 2016 as market escalation and 2017 to 2020 as the mass market stage. The first stage focuses on research and development. The Plan also identifies batteries as the weakness of Germany's automotive sector on the path to the leading position in electromobility.
The increased activities in Sections G and H starting from 2000 might be a reflection of this policy. However, this remains to be validated when more data covering the following years become available. Robustness check: For community tracking, we have performed the analysis under 10-year and 5-year time window settings, and found the results to be very close. The results presented in Figs. 5 and 6 are based on 5-year time windows. In addition, we have done robustness checks using the other community mapping method and with time window shifts, shown in Additional file 2: Figure S9 and Additional file 3: Figure S10, respectively, using the example of B60R's community. In Additional file 2: Figure S9, when referring to the initial year, the layout of the colored blocks is the same as in Fig. 6 except for the colors used, which is merely due to the difference in the definitions of the mapping methods. We find similar results in Additional file 3: Figure S10: there is no difference from Fig. 6 except for the 1-year shift. We have performed such robustness checks for other communities and obtained similar results. This indicates that the community mapping method is stable and consistent in identifying central nodes and tracking communities. Central node identification methods comparison: As an alternative to the community mapping and central node identification method, we try ranking nodes by their betweenness centrality. Betweenness centrality is one of the most widely used measures of vertex centrality in a network (Bavelas 1948; Beauchamp 1965; Freeman 1977). Compared to other centrality measures using degree or closeness, betweenness represents the connectivity of a node as a bridge connecting two other nodes along a shortest path. We use it as an example to demonstrate the similarities and differences between our method and the conventional network centrality measures. The betweenness centrality of each node is calculated to find the nodes with the highest centrality values. In order to avoid an outsized impact from a single year, we use aggregated data from 3 consecutive years to form a network, based on which the centrality is calculated for the first of the 3 years. We present here the comparison of results for the same years from 1980 to 2004. As a major difference between the two algorithms, ours provides a set of central nodes all with the same occurring rate of 1, but the betweenness centrality values of most nodes are different, ranking them from high to low. So when using the betweenness centrality method, we take the M nodes ranking highest by centrality values, with M being the size of the central node set in the same time period using our method. For example, the central node set in the time window starting in 1980 has 131 nodes, and the top 131 nodes with the highest betweenness centrality rankings in the aggregated period of 1980–1982 are used for comparison. The matching rates are shown in Table 3, averaging at 32.45%. Figure 7 shows the distribution over the IPC scheme using both methods. The central nodes based on our algorithm are the ones shared by both referencing methods. Central Nodes Distribution by Comparing the Community Mapping Method with the Betweenness Centrality Method. The x-axis indicates years from 1980 to 2004, and the y-axis indicates the nodes, i.e. IPC subclasses, ordered by IPC index. The community mapping is based on consecutive 10-year windows. Betweenness centrality values are calculated on a 3-year aggregation period starting with the same labeled year as the other method.
Colored blocks indicate the central nodes that are in common and those that differ between the two methods, following the legend. Table 3 Matching rates of central nodes by the community mapping method and the betweenness centrality method The two algorithms are different by definition, and offer different information, as the comparison shows. It is difficult to verify the results against ground truth, but we argue that our method has two important advantages. First, there is no arbitrary control of the number of central nodes. For studying the interaction of technologies in cohesive families, having a set of central nodes rather than a given number of top centrality nodes is intuitively closer to the real-world situation. Second, our method identifies the set of central nodes based on tracked communities over a time window, while the betweenness centrality is calculated for a single time period (3 years in the demonstrated example); additional effort is needed to track communities over time in order to calculate the centrality values for continuous time periods. Simply aggregating data over a 10-year time window and calculating the centrality would only be less accurate. These issues hold true for all other centrality measures. Figure 7 also shows that the central nodes identified using our proposed method are more consistent and concentrated, while the top betweenness centrality nodes are more spread out over the whole IPC scheme. Similarities between the two results also confirm the persistent and changing trends shown in Fig. 4: bio-technology in agriculture and food (A01, A21, A23), chemical technology in medical science and pharmaceutics (A61), material separation and other processing (B01), machine tools (B23), vehicles and transport (B60-B65), organic chemistry (C7-C9), biochemistry (C12), engine technology (F1-F16), physics measuring, testing, computing and controlling (G01, G05 and G06) and electronic technology (H01-H04) are more persistent. Increasing centrality is found for B21-B23, B60-B61, C12-C13 and H03-H04. Comparison with multislice community detection and tracking method: To study networks that evolve over time, another methodology is to treat the changing network as slices at different points in time based on quality functions. Mucha et al. (2010) proposed a method to generalize the problem of network community structure detection using interslice coupling adjacency matrices consisting of coupling parameters between nodes in different slices. The generalized algorithm offers flexible configuration of both the resolution parameter, as we used in the Louvain modularity clustering algorithm, and the interslice coupling parameter indicating connections among slices under Laplacian dynamics. This solution is applicable to the multiplex community detection task we have. To compare the results, we applied the algorithm proposed by Mucha et al. in the same 5-year time windows as shown in Figs. 6 and 7, with the same resolution set to 1.0 and the coupling parameter set to 1.0, and then found the communities containing subclasses B01D and B60R, respectively. The algorithm also obtains clusters based on modularity optimization, and generates a considerable number of very small communities. The same holds for the orphan nodes. Therefore, communities with 5 nodes or fewer are also excluded from the results for comparison, as shown in Additional file 4: Figure S11 and Additional file 5: Figure S12. Comparing Additional file 4: Figure S11 with Fig. 5, and Additional file 5: Figure S12 with Fig. 6, obvious similarities can be observed.
The "move" of B21-B30 from 1990 to 1999 is not shown for B01D. But from 1995 to 1999, most nodes in this section drop out for B60R, although they did not "move" to the community containing B01D. This verifies the marginality of this section: these subclasses tend to have close connections to several different communities. As mentioned before, it is hard to determine which algorithm's result is closer to the truth. Each method has its unique properties. The algorithm by Mucha et al. has the advantage of providing an overall picture of all the communities and their changes over time, but we have found that as the continuous time period increases, the number of clusters detected will decrease, which reduces the sensitivity to changes. When applied to shorter time periods, the two methods have two steps in common: community identification and tracking. For the first step, we argue that our method has higher stability and consistency given the Stabilized Louvain Method. Additionally, our method is capable of finding the central nodes of a community, which is meaningful in the context of this study. Comparison with conventional patent metrics: Compared to the simpler, more straightforward metrics used in conventional patent data analysis, the network approach is more complicated and requires more computational resources. However, we propose the network method for its advantage in studying the structure of an inter-connected system. In Gao et al. (2017), the authors showed that ranking nodes by their connections with a given "key" subclass produced results different from, although largely similar to, those of the network clustering method. From the network perspective, nodes are clustered based on their relative proximity instead of absolute counts or frequencies. Consider the situation where a node k is connected to nodes in two clusters A and B, where A has more nodes than B and therefore contributes N more occurrences/connections. A simple measure will put k as a key node in A, but the network algorithm might attribute k to B if there are other nodes in A with even stronger connections to each other. Secondly, some structural changes may be anticipated by economic historians and policy makers as results of known actions or decisions, but they do not usually unfold as expected, with likely differences in timing or extent. As compared to traditional methods, our approach is better suited to detect structural change and paradigmatic shifts in the technological landscape. Through the three-step procedure, we demonstrated a way to improve community detection for temporally evolving networks, and more importantly, to track the community changes over time. Using Germany as a case study, we have verified this procedure by combining industry literature and robustness checks. Methodologically, our method contributes to the literature on temporal network analysis with a new approach. Comparisons with conventional methods have helped to prove its validity and advantages. In terms of application, it is the first of its kind in patent data analysis. Although the subject of interest here is technological evolution, we expect the proposed approach to become a powerful tool for studying similar systems. Limitations and future work We focus our analysis on selected technological fields. Neither Fig. 4 nor Fig. 7 distinguishes the central nodes by communities.
This is because the communities are not exogenously defined, and tracking all the communities requires selecting a node in each community in the initial year. In fact, none of the methods discussed can show how all the communities change over time in one picture with satisfactory accuracy, sensitivity and stability. Our method is more efficient in showing which nodes are the most central and investigating the evolution of the community containing certain technologies of interest. Given that the method utilizes the information embedded in the network changes, it can be generalized for other temporal network studies. However, the result verification still requires more work due to the reasons mentioned in the Discussion section. A next step in our research is to apply the method to other countries or regions to expose it to a more comprehensive check. This will also provide an opportunity to study how various factors, including policy decisions, market trends, economic growth, national or regional resources, human resources, and government and business investment, would interact with technological exploration. Acemoglu, D, Akcigit U, Kerr WR (2016) Innovation network. Proc Natl Acad Sci 113(41):11,483–11,488. Aynaud, T (2009) Community detection for NetworkX's documentation. https://python-louvain.readthedocs.io/en/latest/. Accessed 22 Feb 2018. Aynaud, T, Guillaume JL (2010) Static community detection algorithms for evolving networks In: Modeling and optimization in mobile, ad hoc and wireless networks (WiOpt), 2010 proceedings of the 8th international symposium on, 513–519. IEEE, Avignon. Bavelas, A (1948) A mathematical model for group structures. Hum Organ 7(3):16–30. Beauchamp, MA (1965) An improved index of centrality. Syst Res Behav Sci 10(2):161–163. Benner, M, Waldfogel J (2008) Close to you? Bias and precision in patent-based measures of technological proximity. Res Policy 37(9):1556–1567. Blondel, VD, Guillaume JL, Lambiotte R, Lefebvre E (2008) Fast unfolding of communities in large networks. J Stat Mech: Theory Exp 10:10,008. Bundesregierung, D (2009) German Federal Government's National Electromobility Development Plan. Development. Comanor, WS, Scherer FM (1969) Patent statistics as a measure of technical change. J Polit Econ 77(3):392–398. Dang, J, Motohashi K (2015) Patent statistics: A good indicator for innovation in China? Patent subsidy program impacts on patent quality. China Econ Rev 35:137–155. statistics database W (2017) Statistical Country Profiles_Germany. http://www.wipo.int/ipstats/en/statistics/country_profile/profile.jsp?code=DE. Accessed 22 Feb 2018. Dernis, H, Khan M (2004) Triadic Patent Families Methodology. http://doi.org/10.1787/443844125004. Accessed 22 Feb 2018. Enkel, E, Gassmann O (2010) Creative imitation: exploring the case of cross-industry innovation. R&D Management 40(3):256–270. Fortunato, S, Barthelemy M (2007) Resolution limit in community detection. Proc Natl Acad Sci 104(1):36–41. Freeman, LC (1977) A set of measures of centrality based on betweenness. Sociometry:35–41. Freeman, LC (1978) Centrality in social networks conceptual clarification. Soc Networks 1(3):215–239. Fruchterman, TM, Reingold EM (1991) Graph drawing by force-directed placement. Softw: Pract Experience 21(11):1129–1164. Gao, Y, Zhu Z, Riccaboni M (2017) Consistency and Trends of Technological Innovations: A Network Approach to the International Patent Classification Data In: International Workshop on Complex Networks and their Applications, 744–756. Springer.
Griliches, Z (1998) Patent statistics as economic indicators: a survey In: R&D and productivity: the econometric evidence, 287–343. University of Chicago Press. Griliches, Z, Schmookler J (1963) Inventing and maximizing. Am Econ Rev 53(4):725–729. GTAI (2017) GTAI - Automotive Industry. http://www.gtai.de/GTAI/Navigation/EN/Invest/Industries/Mobility/automotive,t. market-trends,did=X248004.html. Accessed 22 Feb 2018. Hagberg, A, Swart P, Chult DS (2008) Exploring network structure, dynamics, and function using networkx. Tech. rep., Los Alamos National Lab. (LANL), Los Alamos. Hall, BH, Jaffe AB, Trajtenberg M (2001) The NBER patent citation data file: Lessons, insights and methodological tools. Tech rep. NBER Working Paper No. 8498. Hall, BH, et al (2005) A note on the bias in herfindahl-type measures based on count data In: REVUE D ECONOMIE INDUSTRIELLE-PARIS-EDITIONS TECHNIQUES ET ECONOMIQUES- 110. Revue d'Economie Industrielle, 149th ed. Editions Techniques et Économiques, Paris. Harhoff, D, Scherer FM, Vopel K (2003) Citations, family size, opposition and the value of patent rights. Res Policy 32(8):1343–1363. Hausman, A, Johnston WJ (2014) The role of innovation in driving the economy: Lessons from the global financial crisis. J Bus Res 67(1):2720–2726. Lambiotte, R, Delvenne JC, Barahona M (2008) Laplacian dynamics and multiscale modular structure in networks. IEEE Trans Netw Sci Eng 1(2):76–90. Maraut, S, Dernis H, Webb C, Spiezia V, Guellec D (2008) The OECD REGPAT database: a presentation. STI Working Paper 2008/2. https://doi.org/10.1787/241437144144. Mucha, PJ, Richardson T, Macon K, Porter MA, Onnela JP (2010) Community structure in time-dependent, multiscale, and multiplex networks. Science 328(5980):876–878. OECD (2009) OECD Patent Statistics Manual. 1st edn. OECD PUBLICATIONS, France. http://www.oecd-ilibrary.org/science-and-technology/oecd-patent-statistics-manual_9789264056442-en. Accessed 22 Feb 2018. OECD (2017) OECD patent databases - OECD. http://www.oecd.org/sti/inno/oecdpatentdatabases.htm. Piccardi, C (2011) Finding and testing network communities by lumped Markov chains. PLoS ONE 6:11. https://doi.org/10.1371/journal.pone.0027028. Seifi, M, Junier I, Rouquier JB, Iskrov S, Guillaume JL (2013) Stable community cores in complex networks. Springer. Squicciarini, M, Dernis H, Criscuolo C (2013) Measuring Patent Quality: Indicators of Technological and Economic Value. OECD Sci, Technol Ind Work Pap 70(03). http://www.oecd-ilibrary.org/science-and-technology/measuring-patent-quality. 5k4522wkw1r8-en. Accessed 22 Feb 2018. Valente, TW, Coronges K, Lakon C, Costenbader E (2008) How correlated are network centrality measures? Connections (Toronto, Ont) 28(1):16. Wang, Q (2012) Overlapping community detection in dynamic networks (Doctoral dissertation. Ecole normale supérieure de lyon-ENS LYON). Wang, Q, Fleury E (2010) Mining time-dependent communities. LAWDN-Latin-American Workshop on Dynamic Networks. pp 4–p. Wang, Q, Fleury E (2013) Overlapping community structure and modular overlaps in complex networks. Mining Social Networks and Security Informatics. Springer, Dordrecht. pp 15–40. Wasserman, S, Faust K (1994) Social network analysis: Methods and applications. vol 8. Cambridge University Press. WIPO (2016) IPC 2016.01. http://www.wipo.int/classifications/ipc/en/ITsupport/Version20160101/. Accessed 22 Feb 2018. WIPO (2017a) About the International Patent Classification.
http://www.wipo.int/classifications/ipc/en/preface.html. Accessed 22 Feb 2018. WIPO (2017b) Guide to the International Patent Classification. http://www.wipo.int/export/sites/www/classifications/ipc/en/guide/guide_ipc.pdf. Accessed 22 Feb 2018. The authors declare that no funding was received for the research reported. The datasets generated and/or analysed during the current study are available in the OECD patent-related database repository (OECD 2017) upon request via an online form to the OECD/STI Micro-data Lab. Further information can be found on this page: http://www.oecd.org/sti/inno/intellectual-property-statistics-and-analysis.htm. IMT School for Advanced Studies Lucca, Piazza San Francesco 19, Lucca, 55100, Italy (Yuan Gao & Massimo Riccaboni); Department of International Business & Economics, University of Greenwich, Park Row, London, SE10 9LS, UK (Zhen Zhu); Department of Managerial Economics, Strategy and Innovation, Katholieke Universiteit Leuven, Oude Markt 13, Leuven, 3000, Belgium (Massimo Riccaboni); Department of Economics, University of Arkansas, Fayetteville, 72701, AR, USA (Raja Kali). Under the advising of ZZ, MR and RK, the student YG prepared the dataset used for this analysis, implemented the algorithm coding, and performed the analysis. RK contributed to the design of the time window configuration, MR proposed the overlapping community mapping method, and ZZ contributed to the proposal of the central node identification method. The manuscript was mainly drafted by YG. All authors read and approved the manuscript. Correspondence to Yuan Gao. Additional file 1 Community structures of the individual sample years based on the Louvain modularity optimization algorithm, with resolution of 1.0. Major communities with more than 5 nodes are in the center, with different colors indicating each unique community, surrounded by small communities with 5 nodes or less in white color. (PNG 256 kb) Additional file 2 Tracking community B60R in consecutive 5-year time windows, mapping to the initial year. This figure differs from Fig. 6 in that the overlapping community mapping reference is the initial year of each time window, using the same color coding definitions as Fig. 6. (PNG 312 kb) Additional file 3 Tracking community B60R in consecutive 5-year time windows, starting from 1981, mapping to the previous year. This figure differs from Fig. 6 in that all the time windows are shifted 1 year forward, using the same color coding definitions as Fig. 6. (PNG 316 kb) Additional file 4 Communities containing B01D in consecutive 5-year time windows (starting from 1980–1984) based on the multislice community detection and tracking method. Nodes in blue color are in the same community with B01D in each year. (PNG 257 kb) Additional file 5 Communities containing B60R in consecutive 5-year time windows (starting from 1980–1984) based on the multislice community detection and tracking method. Nodes in blue color are in the same community with B60R in each year. (PNG 275 kb) Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. Gao, Y., Zhu, Z., Kali, R. et al. Community evolution in patent networks: technological change and network dynamics. Appl Netw Sci 3, 26 (2018).
https://doi.org/10.1007/s41109-018-0090-3 Received: 22 February 2018. Keywords: Temporal networks; Patent data; Louvain community detection method; Overlapping community mapping. Special Issue of the 6th International Conference on Complex Networks and Their Applications.
Boundedness of nonlinear continuous functionals Asked 11 years ago Let $K$ be the closed unit ball of $C[0,1]$, and let $f \in C(K,\mathbb{R})$. Is it true that there exists an infinite-dimensional reflexive subspace $E$ of $C[0,1]$ s.t. $f(K\cap E)$ is bounded? If the answer is affirmative, this would be a very weak kind of Weierstrass-type theorem [and also a very general one, due to the "universality" of $C[0,1]$ (i.e., the Banach-Mazur Embedding Theorem)]. One may also replace $C[0,1]$ by $B[0,1]$, the space of all bounded functions on $[0,1]$, endowed with the sup-norm. fa.functional-analysis calculus-of-variations banach-spaces Ady Ady, I think there is a counterexample to your question. To describe it, let $(V_n)$ be a basis of $[0,1]$ consisting of non-empty open sets; $K$ stands for the closed unit ball of $C[0,1]$. For every $n$ let $C_n$ be the closure of $V_n$ and define $U_n=\{g \in K: \min\{|g(t)|:t \in C_n\} > \|g\| - 1/4\}$ where $\|g\|=\sup\{|g(t)|:t \in [0,1]\}$. The family $(U_n)$ is an open cover of $K$. Let $(F_m)$ be a partition of unity subordinate to $(U_n)$. For every $m$ let $n_m$ be the least integer $n$ such that $\operatorname{supp}(F_m)=\{g \in K: F_m(g)>0\}$ is contained in $U_n$. Now define $F:K\to \mathbb{R}$ by $F(g)= \sum_{m=1}^{\infty} n_m\cdot F_m(g)$. Notice that $F$ is well-defined and continuous. Finally notice that $F(K\cap E)$ is unbounded for every infinite-dimensional subspace $E$ of $C[0,1]$. This follows from the following fact: for every integer $i$ and every infinite-dimensional subspace $E$ of $C[0,1]$ there is a norm-one vector $e \in E$ such that $e$ is NOT in $U_n$ for every $n < i$ (and therefore, if $m$ is such that $F_m(e)>0$, then necessarily $n_m$ is greater than or equal to $i$, which gives that $F(e)$ is also greater than or equal to $i$). Why "for every integer i and every infinite-dimensional subspace E of C[0,1] there is a norm-one vector e in E such that e is NOT in U_n for every n < i"? – Ady Dec 29 '09 at 13:19 For every n < i fix some t_n in V_n. Since E is infinite-dimensional you can find a vector e in E such that e has norm one and satisfies e(t_n) < 1/5 for all n < i [indeed, you may select a basic sequence (e_k) in E with basis constant, say, 2 and such that lim_k e_k(t_n) exists for all n < i; so, if k is large enough you have that e_k(t_n)-e_{k+1}(t_n) is almost zero for all n < i; so, for a sufficiently large k, the vector e=(e_k-e_{k+1})/\|e_k-e_{k+1}\| is as desired]. By definition, the vector e is not in U_n for all n < i. – Anonymous Dec 29 '09 at 16:32 Nice construction, Anon. A simpler answer to Ady's question about getting e in E s.t. ... is that the functions in C[0,1] that vanish at t_n for all n<i form a finite codimensional subspace of C[0,1]. – Bill Johnson Dec 29 '09 at 17:49 Ady, I don't have an answer to the new version of your question but let me make some remarks which might be useful. The new version is about non-linear real-valued continuous functions on $\ell_\infty(\Gamma)$ where $\Gamma$ has the cardinality of the continuum. This can be slightly generalized as follows: Let $\kappa$ be an infinite cardinal and set $K$ to be the closed unit ball of $\ell_\infty(\kappa)$. Let $f:K\to\mathbb{R}$ be a continuous map. Does there exist an infinite-dimensional subspace $E$ of $\ell_\infty(\kappa)$ such that $f(K\cap E)$ is bounded? If $\kappa=\aleph_0$, then a counterexample can be constructed.
On the other hand, if $\kappa$ is a measurable cardinal, then there exists a subspace $E$ of $\ell_\infty(\kappa)$ which is isomorphic to $c_0(\kappa)$ and such that $f(K\cap E)$ is bounded. The argument goes back to Ketonen. Let $FIN(\kappa)$ be the set of all non-empty finite subsets of $\kappa$ and define a coloring $c:FIN(\kappa)\to\mathbb{N}$ as follows. Let $c(F)$ be the least integer $m$ such that $\max\{ |f(x)|: x\in \operatorname{span}\{e_t: t\in F\} \text{ and } x\in K \} \leq m$, where $e_t$ is the Dirac function at $t$. Notice that $c$ is well-defined. There exist $n_0\in\mathbb{N}$ and a subset $A$ of $\kappa$ with $|A|=\kappa$ and such that $c$ is constant on $FIN(A)$ and equal to $n_0$. If we set $E$ to be the closed linear span of $\{e_t: t\in A\}$, then $E$ is isomorphic to $c_0(\kappa)$ and $f(K\cap E)$ is in the interval $[-n_0, n_0]$. Concerning the continuum: it might be that there are set-theoretic issues. Firstly, let me recall that it is consistent that the continuum is real-valued measurable (R. M. Solovay). On the other hand, if CH holds, then there is heavy (and quite advanced) machinery for "killing" various Ramsey properties on $\omega_1$ (largely due to S. Todorcevic). A quick remark: there exists a non-linear continuous map $f:K\to\mathbb{R}$, where $K$ is the closed unit ball of $c_0(\kappa)$ and $\kappa$ is the continuum, such that for every infinite-dimensional subspace $E$ of $c_0(\kappa)$ the set $f(K\cap E)$ is unbounded. Pandelis Dodos There is a simpler counterexample for the $C[0,1]$ case. Namely, $f(x):=\log\left(1-\left\Vert x\right\Vert _{\infty}+\left\Vert Vx\right\Vert _{\infty}\right)$, where $V$ is the classical Volterra operator acting on $C[0,1]$. All you need is a one-to-one strictly singular operator from $X$ into some space. There isn't one when $X$ is $c_0(\kappa)$ with $\kappa$ uncountable, but Pandelis took care of those spaces. It certainly looks like the answer to your question is negative for every infinite-dimensional space. – Bill Johnson Feb 13 '10 at 21:15 Firstly, let me give the details for $\ell_\infty(\aleph_0)$; $K$ stands for the closed unit ball of $\ell_\infty(\aleph_0)$. For every $n$ let $U_n=\{ x\in K: |x(n)| > \|x\| - 1/4 \}$. The family $(U_n)$ is an open cover of $K$. Let $(F_m)$ be a partition of unity subordinate to $(U_n)$. For every $m$ let $n_m$ be the least integer $n$ such that $\operatorname{supp}(F_m)$ is contained in $U_n$, and define $$F(x)=\sum_m n_m \cdot F_m(x).$$ Then using the arguments outlined above, one can show that $F(K\cap E)$ is unbounded for every infinite-dimensional subspace $E$ of $\ell_\infty(\aleph_0)$. Secondly, let me remark that my argument for $\ell_\infty(\kappa)$ with $\kappa$ measurable is not correct; I apologize for that (I have a remark at the end). What I can show is that for every $\kappa$ (even measurable) there exists a continuous function $F:K_0\to\mathbb{R}$, where $K_0$ is the closed unit ball of $c_0(\kappa)$, such that $F(K_0\cap E)$ is unbounded for every infinite-dimensional subspace $E$ of $c_0(\kappa)$. The argument is a variation of the previous one. For every pair of rationals $0 < a < b < 1/4$ let $U_{a,b}$ be the set of all $x\in c_0(\kappa)$ such that for every $t\in\kappa$ either $|x(t)| < a$ or $|x(t)| > b$. Notice that $U_{a,b}$ is open in $K_0$ and for every $x\in K_0$ there exists such a pair $(a,b)$ such that $x\in U_{a,b}$.
Now for every $n$ (including zero) and every pair $0 < a < b < 1/4$ let $U_{a,b,n}$ be the set of all $x\in U_{a,b}$ for which the cardinality of the set $\{t: |x(t)| > b\}$ is $n$. The family $(U_{a,b,n})$ is an open cover of $K_0$. Let $(F_i)_{i\in I}$ be a partition of unity subordinate to a locally finite refinement of $(U_{a,b,n})$. For every $i\in I$ set $L_i=\{n: \text{there exist } 0 < a < b < 1/4 \text{ s.t. } \operatorname{supp}(F_i) \text{ is contained in } U_{a,b,n}\}$ and let $n_i$ be the least element of $L_i$. Now define $F:K_0\to\mathbb{R}$ by $$F(x)=\sum_i n_i \cdot F_i(x).$$ It is continuous. Now we check that $F(K_0\cap E)$ is unbounded for every infinite-dimensional subspace $E$ of $c_0(\kappa)$. So let $E$ be one. Since $c_0(\kappa)$ is hereditarily $c_0$, by James, we can find a normalized sequence $(e_n)$ in $E$ which is $2$-equivalent to the standard unit vector basis of $c_0$ (in particular, $(e_n)$ is weakly null). Fix some integer $M$. We may recursively select a sequence $(n_k)$ in $\mathbb{N}$ such that for all $k < m$ the sets $\{t: |e_{n_k}(t)| > 1/4M\}$ and $\{t: |e_{n_m}(t)| > 1/4M\}$ are disjoint. Consider the vector $e= \sum_{k=1}^M e_{n_k}$. Observe, first, that $1/2\leq \|e\| \leq 2$. Also notice that the set $\{t: |e(t)|\geq 3/4\}$ has cardinality at least $M$. Let us normalize $e$ and denote the normalized vector by $v$. The set $\{ t: |v(t)| \geq 3/8 \}$ has cardinality at least $M$. Let $i\in I$ be such that $F_i(v)>0$. Let $0 < a < b < 1/4$ and $n$ be arbitrary such that $\operatorname{supp}(F_i)$ is contained in $U_{a,b,n}$. Notice that the set $\{t: |v(t)| \geq 3/8\}$ is contained in the set $\{t: |v(t)|> b\}$, and so, the cardinality of the set $\{t: |v(t)| > b\}$ is at least $M$. It follows that $n_i\geq M$, yielding that $F(v)\geq M$. @Pandelis Why is it [i.e., the "colouring" argument] not correct? – Ady Feb 7 '10 at 15:07 Ady, if you do the coloring argument then you will "canonize" the function $F$, i.e. $F(x)$ will only depend on the size of the support of $x$. I thought that this was enough to "freeze" the values of $F$ on a large set and this was wrong. What the coloring argument yielded was a hint in order to get the counterexample above. For "bigger" spaces I honestly don't know. – Pandelis Dodos Feb 7 '10 at 19:11
Falls and risk factors of falls for urban and rural community-dwelling older adults in China Li Zhang (ORCID: orcid.org/0000-0001-5843-6550), Zhihong Ding, Liya Qiu & An Li Falls among older people have become a public health concern due to serious health consequences. Despite abundant literature on falls in older people, little is known about the rural-urban differentials in falls among older people in China. This research fills the voids in prior literature by investigating falls and the associated risk factors among Chinese seniors, with a particular focus on the rural-urban differences. Data are from the 2010 wave of the Chinese Longitudinal Survey on Urban and Rural Elderly. The analysis includes 16,393 respondents aged 65 and over, with 8440 and 7953 of them living in urban and rural areas, respectively. Descriptive analyses are performed to examine incidence, locations, circumstances and consequences of falls in older people. Regression analysis is used to investigate the effects of risk factors on falls among older people in urban and rural China. The incidence of falls is higher among rural than urban older people. In both settings, older people are more likely to fall outside the home. But common outdoor falls among rural and urban older people differ in terms of locations and circumstances. Urban older people are more likely to report falling on the road whereas their rural counterparts have experienced more falls in the yard. Falls occurring within homes or immediate home surroundings are also common, but few falls occurred in public areas. The rate of hospitalization of urban seniors after falling is higher than that of rural ones. Most risk factors for falls show similar rather than different effects on rural and urban elders' risks of falling. Incidence, locations, circumstances and consequences of falls vary among Chinese rural and urban older people. But most risk factors for falls show similar effects on rural and urban elders' odds of falling. Implications drawn from this research provide suggestions for the government and local agencies to develop suitable fall prevention strategies which may well be applicable to other countries. Falls among older people have become a major public health problem in many countries. According to the World Health Organization's estimation, there are nearly 424,000 fatal fall incidents each year [1]. It is estimated that approximately 30.0% of people aged 65 years and older have experienced a fall, and about half of them experienced recurrent falls [2]. About one third of community-dwelling people aged 65 and over in the United States experience falls each year, with about 10.0% of falls resulting in serious injuries [3]. Older people who had a fall are also more likely to experience serious complications, resulting in death within the same year 50.0% of the time [4]. Data from the 2014 National Injury Surveillance System (NISS) revealed that in China, of a total of 77,779 accidental injuries among people aged 60 and over, 52.8% were caused by falls [5]. Thus far, accidental injuries have become the fourth leading cause of death among Chinese older people; and falls are indeed the major cause of elders' accidental injuries in China [6]. Senior falls in countries around the world (including the U.S. and China) have become a serious public health problem [7].
Since falls and injuries may result in disability, loss of independence, fear of falling, social isolation, functional decline and mortality [3], exploring patterns of falls among Chinese older people and the associated risk factors is warranted. According to the 2010 Chinese Census data, by 2010 the Chinese population aged 60 and over had reached 178 million, which is about 8.9% of the total Chinese population. Among them, the population aged 80 and over was 21 million, which accounts for 1.57% of the overall Chinese population. Considering such a large population of seniors, it is very important to explore factors that may cause accidental falls that deteriorate older people's health. A clearer understanding of how personal characteristics, health conditions, and behavioral and lifestyle factors influence fall rates in Chinese populations is essential for elucidating fall prevention strategies in China. Effective senior fall prevention strategies found in China may well be applicable to other countries, such as Japan, South Korea and the like. The Chinese experience may also be beneficial to some less developed countries as the living condition of rural China resembles that in other developing countries. Given the huge gap that has long existed between rural and urban China, investigating the rural-urban differentials in falls among Chinese older people becomes the major concern of this research. The definition of rural and urban residents is based on the household registration status of the respondent. Since the establishment of socialist China, the Chinese government has developed the household registration system to control the internal mobility of people moving from the countryside to urban areas. The household registration status is based on the rural or urban residence of citizens. Before the 1980s, residents whose household registrations were in rural areas were not allowed to migrate to cities. The household registration system therefore has served as an "internal passport" to prevent people moving from villages to cities. Such a residential registration system has played an important role in population management and social control, and has also helped to maintain socioeconomic inequalities. Big gaps exist between rural and urban citizens with regard to benefits and entitlements. After the 1980s, as China shifted from a planned economy to a market-oriented economy, the government exercised less restrictive control over rural-to-urban mobility. This is because a tremendous amount of rural labor was demanded by non-government-owned sectors in cities. Consequently, a huge number of rural citizens moved to cities. But those migrants often suffered stigmatization and deprivation in various aspects, including job seeking, workplace benefits, and access to medical and other public services. Urban residents in China generally reported higher educational attainments and income, better health, longer life expectancy and more favorable living conditions as compared to their rural counterparts [8,9,10,11,12]. Medical resources, including hospitals and medical personnel, are readily available in most cities, but not in many areas of the countryside. In addition, although medical insurance in China has expanded rapidly since the turn of the century, reimbursement rates are generally much lower in rural areas [13, 14]. The health disparities and disadvantages in medical insurance for rural residents may have caused different consequences of falls in rural and urban China.
This is because the unavailability of medical resources in rural areas may have prevented elders from seeking medical treatment immediately after falls. As a result, clinic visits, hospitalization and long-term treatment rates after falls in rural and urban areas differ. The living environments for rural and urban older people are also dissimilar. For example, most rural residents live in more spacious one-story apartments whereas a large proportion of urbanites squeeze into high-rise buildings. Drastically diverse environmental factors could be a reason for dissimilar fall incidence rates among elders residing in cities and the countryside. Cultural traditions, leisure and physical activities among rural and urban seniors are also dissimilar. Living arrangement patterns in the two spheres are divergent as well. There is a higher proportion of urban elders living in institutions or living alone than of their rural counterparts [15, 16]. Prior research found that having adult children around and being taken care of by adult children reduced Chinese elders' probability of falling [17]. Thus, the variation in living arrangement patterns could cause diverse senior fall incidence rates in rural and urban China. Considering the rural and urban divide presented in previous literature, this study hypothesizes that the rural-urban gaps discussed above could have caused patterns and determinants of falls to vary between rural and urban seniors. Before describing patterns of falls among Chinese older people, the paper will first review fall risk factors that have been revealed in previous studies, which provides a guideline for the current analysis. An overview of fall risk factors Falls among older people have diverse and complex causes. One line of research has grouped fall risk factors among elders into intrinsic and extrinsic categories. The intrinsic risk factors include previous history of falls, cognitive and functional impairment, poor vision, very old age (80 and above), arthritis of the knees, poor balance while standing, turning, changing position or walking, use of assistive devices, comorbidities (depression, stroke, Parkinson's disease, postural hypotension, arthritis), weak hand grip strength, motor weakness (e.g., difficulty in standing up from a chair), gait impairment and medications (the use of hypnotics, anti-depressants or tranquillizers and the use of four or more prescribed drugs) [18,19,20]. Having disabilities in activities of daily living (ADL) has also been documented to be linked to a higher risk of falling among elders [21]. The extrinsic risk factors are mostly environmental ones that relate to the living conditions of older people. According to prior analyses, home hazards have been found to be a major factor resulting in senior falls [22]. Generally, the greater the number of risk factors in an older adult, the higher the risk of falling. It has been reported that the risk of falling increases from approximately 10.0% for those with no or one risk factor to approximately 70.0% for those with four or more risk factors [23]. Another line of research classified fall risk factors into those causing indoor falls and those causing outdoor falls. Studies showed that indoor falls tended to occur among frail individuals, such as women with poor health characteristics. But outdoor falls were more likely to occur among more active people with healthier characteristics, such as fast gait speed, and are heavily influenced by outdoor environmental characteristics [24,25,26].
By studying 2193 individuals aged 45 and over from five Northern California Kaiser Permanente Medical Centers between 1996 and 2001, Li and colleagues found that falls occurred outdoors more often than indoors among most seniors. Physical activity showed a positive association with the risk of outdoor falls, and poorer health was revealed to be linked to a greater risk of indoor falls [27]. Kelsey and associates studied 765 women and men, mainly aged 70 and older, from randomly sampled households in the Boston, Massachusetts, area. They concluded that risk factors for indoor falls included older age, being female, and various indicators of poor health. Risk factors for outdoor falls were younger age, being male, and being relatively physically active and healthy [28]. Although numerous studies have identified risk factors for falling, few studies have focused on examining whether fall risk factors are similar for rural and urban Chinese older people. Additionally, no study has contrasted incidence, locations, circumstances, and consequences of falls among urban and rural older people in China. To address the above concerns, this study analyzes community-dwelling individuals aged 65 and over drawn from nationally representative data to: 1) reveal the incidence, locations, circumstances, and injuries relating to falls among older people in urban and rural China; 2) contrast environmental and personal risk factors for falls among rural and urban Chinese elders. The study contributes to the literature by improving our understanding of risk factors for falls among Chinese rural and urban elders. The results will be useful for creating optimal fall prevention strategies for older adults in both rural and urban China. Findings from China also provide useful implications for other countries. Data and sampling strategy Data come from the 2010 wave of the Chinese Longitudinal Survey on Urban and Rural Elderly, conducted by the Chinese Research Center on Aging (CRCA). Since 2000, the CRCA has surveyed community-dwelling rural and urban elders' personal characteristics and a variety of issues including retirement and employment after retirement, social welfare and security, housing, community-based service programs and utilization, family networks and social participation, medical insurance and health programs, mental health and available psychological consulting services. The 2000 and 2006 waves did not include questions on older people's fall-related information. The 2010 wave is the first one that started collecting information on occurrences, incidence, circumstances, locations, as well as consequences of elders' falls. Based on the distribution of the Chinese population aged 60 and over reported by the 2010 Census data, the survey applied a multistage proportional random selection strategy to choose respondents from 20 (out of a total of 31, excluding Hong Kong, Macau and Taiwan) provinces, autonomous regions and municipalities in China. The 20 provinces, autonomous regions and municipalities include Beijing, Hebei, Shanxi, Liaoning, Heilongjiang, Shanghai, Jiangsu, Zhejiang, Anhui, Fujian, Shandong, Henan, Hubei, Guangdong, Guangxi, Sichuan, Yunnan, Shaanxi and Xinjiang. According to the 2010 Chinese Census data, 49.7% of the population lived in urban areas and the remaining 50.3% lived in rural areas. Thus, the 2010 wave of the Chinese Longitudinal Survey on Urban and Rural Elderly sampled equivalent numbers of rural and urban elders.
In each province, autonomous region and municipality, 500 rural and 500 urban respondents were interviewed. The sampling design is as follows: (1) Randomly choosing 4 cities or 4 counties in each province/autonomous region/municipality (in municipalities, districts are treated as equivalent to cities or counties); (2) In each city or county, randomly selecting 16 streets or villages; (3) In each street or village, randomly choosing 50 neighborhood or village committees; (4) In each neighborhood or village committee, randomly choosing 10 households with at least one senior aged 60 and over to participate in the survey interview. Please see Fig. 1 for the sampling scheme of the 2010 wave of the Chinese Longitudinal Survey on Urban and Rural Elderly. Sampling Scheme of 2010 Wave of the Chinese Longitudinal Survey on Urban and Rural Elderly The survey was granted ethical approval by the Research Ethics Committees of the Chinese Research Center on Aging (CRCA). All subjects signed written informed consent before the interview. The respondents also agreed that the anonymous information collected by the survey can be released for public use (including publication). Through door-to-door recruitment in randomly sampled households with at least one member aged 60 years or older, the 2010 wave of the survey obtained 20,000 respondents. To be consistent with other countries, this research defined the older population as those aged 65 and over. Accordingly, the analysis includes 16,393 respondents with 8440 and 7953 of them living in urban and rural areas, respectively. In the survey, the respondent was asked if he/she had experienced any falls in the past year. The definition of falls in this research follows the definition given by the World Health Organization, that is, "a fall is defined as an event which results in a person coming to rest inadvertently on the ground or floor or other lower level" (https://www.who.int/news-room/fact-sheets/detail/falls). If the respondent answered "yes", then the frequency of falls was also ascertained. There was also a question asking the number of falls in the past year. Reasons, places, and consequences of falls were also asked for the falls during the past 12 months. These questions are used to measure older people's falls in this research. When using regression models to examine the effects of fall risk factors on elders' falls, the fall variable is coded as a categorical variable, i.e., whether the respondent reported any falls in the year before the survey year. Coding falls as a categorical rather than a continuous variable is preferred because fall-related information was all based on elders' memories. Thus, it is easier for older people to recall whether they had falls than to recall how many falls they had in the previous year. In this sense, whether they had falls may capture more accurate information than the number of falls. The personal characteristics include age, gender, education (1 = illiterate, 0 = literate), annual household per capita income, and living arrangements (1 = living alone, 0 = living with others). Marital status is highly correlated with living arrangement patterns of the respondent. To avoid collinearity, the marital status measure is not included in the analysis. Several measures evaluating the respondent's health status are applied in the study. The first measure is self-rated health, which is derived from the following question: "how would you rate your health status?" The answers are very poor, poor, fair, good and very good.
The variable is coded as a continuous variable in the regression analysis. The second measure is the respondent's chronic disease status; the questionnaire asked: "How many chronic diseases do you have?" There were 25 types of chronic diseases in the answer list, including hypertension, dyslipidemia, diabetes or high blood sugar, cancer or malignant tumor, chronic lung diseases, liver disease, heart disease, stroke, kidney disease, stomach or other digestive disease, arthritis or rheumatism, asthma, etc. In the preliminary analysis, correlation analyses were conducted and the study identified the top 10 chronic diseases that have the highest correlation coefficients with falls. Among the top 10 chronic diseases, the study keeps the top five that have caused the highest fall incidence rates among both rural and urban elders, which are cardiovascular diseases, arthritis, cervical and lumbar spondylosis, cerebrovascular disease, and hypertension. The third measure of the respondent's health condition is the vision variable, which is coded as a continuous variable (1 = bad, 2 = fair, 3 = good). Cognitive function has been operationalized by a series of questions asking whether, during the past month, the respondent was not able to: 1) recognize relatives or friends, 2) remember names of relatives or friends, 3) find the way home, 4) remember to take keys when going out, 5) remember that food is being cooked in the oven. Each item is coded as "1" if the answer was "yes" and "0" otherwise. The cognitive function variable thus has a minimum score of 0 and a maximum score of 5. Depressive symptoms are assessed by the Geriatric Depression Scale, with a score of 8 or above (out of 15) indicating depression. This scale has been previously validated in the Chinese older population [29]. The elder's functioning status is assessed by activity of daily living (ADL) disabilities. It was surveyed by the question: "Do you have any difficulty with dressing, bathing, eating, transferring, getting into or out of bed, or using the toilet (including getting up and down)?" There are three options for the respondent to choose, which are "no, I don't have any difficulty", "have some difficulty" and "cannot do it at all", coded as 1, 2 and 3, respectively. The six items are then summed to give the ADL score, which ranges from 6 to 18. The higher the ADL disability score, the lower the functioning status. The environmental factors are measured by three variables. The first variable comes from the question asking whether tap water is available in the household (1 = yes, 0 = no). This variable is considered a measure of the environment because in some rural areas where tap water is not available at home, one needs to fetch water from a well. If no one helps the elder fetch water from the well, the likelihood of falling among elders is assumed to increase. The second variable measures the household type of the respondent (1 = high-rise building with elevators, 2 = high-rise building without elevators, 3 = one-story apartment). It is hypothesized that high-rise buildings without elevators increase the odds of older people's falls as compared to other household types. The last variable measures the respondent's perceptions towards his/her living conditions (1 = satisfactory, 0 = unsatisfactory). This measure, to a certain extent, captures the living condition of the respondent.
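To make the variable construction above concrete, the following minimal sketch (not the authors' code) shows how the composite ADL, cognitive, and depression measures could be derived from item-level responses; the column names adl_1–adl_6, cog_1–cog_5, and gds15 are hypothetical placeholders rather than the actual variable names in the CRCA dataset.

```python
import pandas as pd

def build_health_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Construct the composite health measures described in the text."""
    adl_items = [f"adl_{i}" for i in range(1, 7)]   # six ADL items, each coded 1/2/3
    cog_items = [f"cog_{i}" for i in range(1, 6)]   # five cognitive items, each coded 0/1
    out = df.copy()
    out["adl_score"] = out[adl_items].sum(axis=1)        # ranges from 6 to 18
    out["cognitive_score"] = out[cog_items].sum(axis=1)  # ranges from 0 to 5
    out["depressed"] = (out["gds15"] >= 8).astype(int)   # GDS-15 cut-off of 8
    return out
```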
Previous literature indicated that increased physical activity was associated with a decreased risk for chronic conditions (such as obesity and cardiovascular disease) as well as a lower likelihood of falling among older persons [2]. Thus, several physical activity variables are applied in this analysis, investigating whether older people playing Tai Chi, participating in muscle-toning exercises and taking a walk (1 = yes, 0 = no) influence elders' odds of falling. The respondent's life style is measured by two variables, that is, whether the respondent had a history of smoking and whether the respondent had a history of drinking (1 = yes, 0 = no). Descriptive information for all variables is presented in Table 1.
Table 1 Descriptive Statistics of Variables, R Aged 65 and Over, China
Descriptive analyses are used to show basic information on the studied sample. Descriptive results are presented separately for the urban and rural subgroups. Statistical tests are also applied to investigate whether the rural-urban differences are statistically significant. Chi-square tests are used for categorical variables and t-tests are applied for continuous variables (see Table 1 for details). When studying the incidence, locations, circumstances and consequences of falls among the older population, besides contrasting the rural-urban differences, gender differentials are also examined. Since the fall-related variables are categorical, Chi-square tests are applied in this section of the analysis (see Table 2 for details).
Table 2 Falls by Residence, Sex and Age Group: R Aged 65 and over, China
Since the dependent variable, whether the respondent had a fall in the year prior to the survey year, is a binary variable, multivariate logistic regression is used to investigate how various factors impact the respondent's odds of falling. Separate logistic regression models are constructed for the rural and urban subgroups. The regression equation is as follows:
$$ \mathrm{logit}(p) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_p x_p $$
where p represents the probability that Y equals 1, meaning the respondent reported a fall, β0 is the constant, and βp is the regression coefficient for xp. The regression results are presented in Table 3.
Table 3 Logistic Regression Results of Fall Risk Factors on Falls: R Aged 65 and Over, China
The significance levels of variables in the logistic regression models are denoted as follows: * p < 0.05, ** p < 0.01, *** p < 0.001. Stata 14.0 is the statistical software used to conduct the analysis. The final step of the analysis is to explore whether the regression coefficients for the urban subgroup are significantly different from those for the rural subgroup. The research uses Z-tests to perform this analysis. The Z value is calculated by the following formula:
$$ Z = \frac{b_1 - b_2}{\sqrt{SE_{b_1}^2 + SE_{b_2}^2}} $$
where b1 is the regression coefficient of independent variable X for group 1 (urban subgroup), b2 is the regression coefficient of the same variable X for group 2 (rural subgroup), and SEb1 and SEb2 are the standard errors of the coefficients for the first and second groups, respectively. The calculated Z-test values for the fall risk variables in the logistic regression models are presented in Table 4. If the absolute value of Z for any one variable is less than 1.96, the null hypothesis cannot be rejected, i.e., the coefficient in one regression model is treated as the same as the coefficient in the other model.
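As an illustration of the analysis pipeline described above, the sketch below fits separate logistic regressions for the urban and rural subgroups and applies the Z-test formula to a single coefficient. It is an assumption-laden example rather than the authors' Stata code: the outcome name fell_last_year and the predictor list are hypothetical placeholders.

```python
import numpy as np
import statsmodels.api as sm

def fit_logit(df, predictors, outcome="fell_last_year"):
    """Fit a logistic regression of the binary fall indicator on the given predictors."""
    X = sm.add_constant(df[predictors])
    return sm.Logit(df[outcome], X).fit(disp=False)

def coefficient_z_test(model_urban, model_rural, predictor):
    """Z-test comparing one coefficient across the urban and rural models."""
    b1, b2 = model_urban.params[predictor], model_rural.params[predictor]
    se1, se2 = model_urban.bse[predictor], model_rural.bse[predictor]
    z = (b1 - b2) / np.sqrt(se1 ** 2 + se2 ** 2)
    return z, abs(z) > 1.96  # True means the coefficients differ at the 5% level
```

For example, coefficient_z_test(urban_model, rural_model, "income") would reproduce the kind of urban-versus-rural comparison reported in Table 4.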
If the Z-test value is greater than 1.96, the null hypothesis is rejected, signifying that the coefficient in one equation predicting the dependent variable is significantly different from the corresponding coefficient in the other equation. A rejection of the null hypothesis that the coefficients of a particular independent variable are the same in the two regression models is indicated by "No." An acceptance of the null hypothesis that the coefficients are the same is indicated by "Yes."
Table 4 Z-Tests to Determine if Regression Coefficients are Significantly Different for Urban and Rural Subgroups: R Aged 65 and Over, China
Descriptive results
Incidence of falls
As Table 2 shows, at least one fall occurred in the previous 12 months among 14.0 and 17.0% of urban and rural respondents, respectively. Overall, rural elders reported a slightly higher percentage of falls. The percentage of women who fell is higher than that of men among both rural and urban respondents. Figure 2 indicates that the proportion of falls increases with age in both the rural and urban subgroups. In each age group, the percentage of fallers is higher among the rural than the urban group. For instance, the proportion of urban elders aged 65 to 74 who fell more than once is 5.4%, and the corresponding rate is 8.3% among rural elders. Another important finding is that with increasing age, the percentage of respondents who fell more than once increases drastically. For example, urban multiple fallers aged 65 to 74 account for about 5.4% of all respondents; this percentage almost doubles for the age group 75 to 84 and amounts to 12.1% among the urban subgroup aged 85 and over. The findings suggest that the incidence of falls increases with age. The age effect on the rate of falls is particularly strong among multiple fallers. Chi-square tests show that differences in fall incidence between the rural and urban subgroups and between males and females are statistically significant. Thus, one has 95% confidence to claim that, for the studied Chinese sample, urban elders are less likely to fall than their rural counterparts, and males are less likely to fall than females in both rural and urban areas.
Percentage of Falls by Age Group and Residence: China
Locations of falls
Since multiple falls may have occurred to some respondents, the sums of percentages under "location of falls" and "circumstance of falls" exceed 100%. Table 2 shows that the percentage of falls on the road is the highest among all locations (35.0% of urban elders and 30.8% of rural elders fell on the road, respectively), followed by stairs, the yard/community, and the bedroom and living room within their homes or immediate home surroundings. A large proportion of falls occurred on surface levels such as the bedroom, living room and kitchen. Relatively few falls occurred in public restrooms/shower rooms, shopping centers and parks. The results show that the location of falls is related to age, sex and residence. The number of falls occurring outside the home decreases with age, with a corresponding increase in the number of falls occurring inside the home. In general, more women fell inside the home than men. These findings are consistent with those of previous research [30, 31]. The results indicate that the occurrence of falls is associated with exposure. Falls are likely to occur in situations where older people are undertaking their daily activities. According to
Fig. 3a and b, the most noticeable rural-urban difference lies in falls occurring in the yard/community (45.9% of rural elders vs. 13.4% of urban elders fell in the yard/community). This is probably caused by the different household structures in rural and urban China. Rural dwellers often have spacious yards whereas most urban dwellers do not, which may have resulted in the proportion of falls in yards/communities being much lower among urban than rural respondents. The proportions of elders who fell in parks, shopping centers, restrooms, living rooms, and on stairs are higher among urban than among rural seniors. Such differences could be caused by divergent life styles among urban and rural residents. Chi-square tests indicate that the differentials shown in most locations of falls between the rural and urban subgroups are statistically significant. Falls that occurred on the road and in the yard/community only show significant differences between male and female elders in rural areas. This means that rural males have a higher risk of falling on the road whereas rural females have higher odds of falling in the yard/community. Significant gender differences also exist in terms of urban elders' falls that occurred in public washrooms and at doorsills. The likelihood of falling in public washrooms and at doorsills is higher for urban female elders than for their male counterparts.
a Location of Falls: Urban Seniors, China. b Location of Falls: Rural Seniors, China
Circumstances of falls
Respondents aged 65 and over suffered most falls while taking a walk. "Fell while walking" constitutes over 70.0% of the reasons for falls (71.0% vs. 79.1% among urban and rural respondents, respectively). Sitting down and getting up, going to the toilet, housekeeping and exercising also tend to be common circumstances of falls. The Chi-square tests reveal that the circumstances of falls among older people show significant rural-urban differences. To illustrate, urban elders are more likely to fall when getting up and sitting down, bathing, and doing other activities, whereas rural elders have higher odds of falling when taking a walk and picking up items. The circumstances of falls also vary by sex. More women than men fell when they were doing housekeeping-related activities. More men than women fell when they were exercising or getting up and sitting down. But Chi-square tests show that the gender differentials are significant only among rural elders. Rural older men have a higher likelihood of falling when toileting, whereas rural older women show a higher risk of falling when picking up items. The circumstances of falls vary by age as well. Figure 4a and b indicates that the percentages of elders aged 85 and over who fell when getting up and sitting down, as well as when toileting, are higher than those of other age groups. These findings suggest that diverse prevention strategies targeting different sexes and age groups are warranted.
a Circumstance of Falls: Urban Seniors, China. b Circumstance of Falls: Rural Seniors, China
Consequences of falls
Falls may cause serious consequences. The results show that 12.3% of urban and 9.4% of rural older people reported hospital admission after their most recent fall. About 3.0% of urban and 4.0% of rural older people required long-term care after falls. The percentages of elders who needed clinical visits are 15.9 and 16.1%, respectively.
About two-fifths of older people (40.8% of urban respondents and 41.2% of rural elders) reported minor injuries. The rest of the sample reported no injuries (26.9% of urban elders vs. 28.8% of rural elders). More readily available hospitals and medical resources in cities than in the countryside may be one reason for the higher hospitalization rate among urban elders. However, based on Chi-square tests, no significant rural-urban differences are found in terms of the consequences of falls. Significant gender differences are only found among the urban subgroup.
Risk factors of falls
Now the paper turns to the analysis of risk factors of falls. The article first presents the descriptive results of fall risk factors. The risk factors are grouped under five categories, which are personal characteristics, health status, environmental factors, physical activities and life styles. Table 1 shows that among the respondents, a slightly higher percentage of men is included in the rural subgroup. About half of the respondents are aged 65 to 74, and the oldest-old group accounts for only 6.8 and 7.9% of the urban and rural subgroups, respectively. Nearly half of the rural respondents are illiterate whereas only 17.2% of the urban sample is illiterate. The household per capita income is much higher in urban than in rural areas. About 83.0% of the respondents reported living with others and the rest either lived alone or were in institutions. Chi-square test and t-test results indicate that significant rural-urban differences exist in all of the above personal characteristic variables. As to the respondents' health status, urban seniors reported better vision than their rural counterparts. As compared to rural elders, higher percentages of urban elders reported having hypertension, heart disease, cervical and lumbar spondylosis and cerebrovascular disease. Urban respondents show a better average cognitive score than rural ones. The mean ADL disability score is higher for rural than for urban respondents (7.0 vs. 6.8). A higher proportion of rural respondents experienced depressive symptoms than their urban counterparts (34.5% vs. 17.3%). Chi-square test and t-test results indicate that, except for the arthritis variable, significant rural-urban differences exist in all of the above health status variables. Overall, urban older people's health status is better than that of their rural counterparts. The environmental factors show significant rural-urban disparities. Nearly one third of rural elders claimed that tap water was not available in their households whereas only 2.0% of urban families had no access to tap water. The apartment types also vary drastically between urban and rural areas, with a majority of rural elders living in one-story apartments and urban dwellers living in high-rise buildings. Chi-square tests show that significant rural-urban differences exist in two of the above environmental variables (apartment type and availability of tap water). More urban elders reported doing physical exercise than their rural counterparts. The percentages of smokers and drinkers are both higher among the rural subgroup. Chi-square tests show that significant rural-urban differences exist in all physical activity and life style dimensions studied in the research.
Regression results
Table 3 presents the logistic regression results of fall risk factors on falls among the rural and urban senior subgroups.
With regard to personal characteristics, the results show that among the urban subgroup, men are 25.0% less likely than women to fall. With every one-year increase in age, the odds of falling increase by 2.0%. Seniors who reported higher income also had significantly reduced risks of falling. Education and living arrangement patterns (whether living alone) do not have a significant effect on falls. None of the personal characteristics are found to be associated with older people's falls among the rural subgroup. All health status indicators show significant influence on both rural and urban elders' risks of falling. With every one-unit increase in the vision score, the risks of falling decrease by 8.0 and 22.0% among urban and rural elders, respectively. Similarly, with every one-level increase in the respondent's self-rated health, the risks of falling among urban and rural elders drop by 26.0 and 29.0%, respectively. Both cognitive and ADL impairment are significantly associated with a higher risk of falls. Elders with depressive symptoms are 21.0 and 17.0% more likely to experience falls as compared to those without such symptoms among urban and rural elders, respectively. The presence of chronic diseases, including hypertension, cardiovascular disease, arthritis, and lumbar vertebrae disease, is associated with an increased risk of falls among both rural and urban respondents. The odds of falling decrease by 20.0% for those who reported having tap water available at home. This is because the unavailability of tap water may force older people to go out and get water from nearby wells; this activity increases the likelihood of falling, especially for those seniors who live alone. As compared to the respondents who lived in one-story apartments, those who lived in high-rise buildings without elevators tend to have about 1.2 and 1.4 times the risk of falling in urban and rural areas, respectively. Fallers tend to be dissatisfied with their living conditions as compared to nonfallers. This is perhaps because unsatisfactory living conditions are usually linked to inconvenient features, such as having no access to tap water or lacking elevators in older apartment buildings. These factors could contribute to elderly falls. Taking a walk, a variable measuring the respondent's physical activities, is found to decrease the risk of falls by about 18.0% among urban elders. No association was observed between practicing Tai Chi or aerobics and the occurrence of falls. Smokers do not show a higher risk of falls, but drinkers are 1.1 times more likely to fall than non-drinkers among the urban subgroup. These results suggest that most fall risk factors identified in studies of Western elders are applicable to the urban older people in the studied sample, whereas only health status and environmental measures show significant effects on rural elders' falls. The Z-values in Table 4 indicate that, for the most part, the effects of the fall risk factor variables on urban older people's odds of falling are not different from their effects on rural elders' risks of falling. Among the personal characteristics variables, only income plays a more important role in predicting urban elders' odds of falling. Among the health status variables, vision has a greater effect on rural elders' risks of falling. Of the three physical exercise variables, only the walking variable shows a much stronger impact on urban elders' risks of falling as compared to its effect on rural elders' fall risks.
This suggests that in China, despite the higher incidence of falls among rural than among urban older people and the significant rural-urban differences for almost all fall risk factors, the studied fall risk variables account for a similar amount of variation in rural and urban elders' odds of falling. Moreover, the effects of the independent variables predicting urban and rural elders' risks of falling are more similar than different. This research has studied the incidence, locations, circumstances and consequences of falls and has further contrasted risk factors associated with falls among Chinese community-dwelling rural and urban elders aged 65 and over. Findings suggest that more falls occurred among rural than urban seniors, and the fall incidence rate is higher among females than males. In addition, more falls occurred outside than inside the home environment, with the percentages of elders falling on the road (35.0%) and in the yard (45.9%) being the highest among urban and rural elders, respectively. Meanwhile, about 71.0% of the urban respondents and 79.0% of the rural respondents reported "walking" as the circumstance of falling. The regression results indicate that all environmental measures show significant effects on the odds of falling among both rural and urban subgroups, emphasizing the important role of environmental factors in predicting senior falls. This finding differs somewhat from Svensson, Rundgren and Landahl's (1992) study, which showed a high proportion of falls occurring indoors [32]. The discrepancy may be caused by the relatively younger age structure of our sample. Indeed, recent studies on Chinese elders' falls and risk factors by Pi and associates (2015) and Wang et al. (2019) echoed the conclusion of this study [33, 34]. Through analyzing a sample of residents surveyed in 68 assisted living facilities in Houston, Chicago and Seattle in the United States, Lee and colleagues (2019) further revealed that adequately designed walkways and higher comfort levels when using outdoor areas were helpful in reducing elders' fear of falling, which can effectively reduce the risk of senior falls among assisted living residents [35]. Given the findings of our research and other studies, enhancing environmental conditions, including better design and maintenance of sidewalks, walkways, streets, parks and recreational places, is highly recommended. The regression analysis of this research has shown that the availability of tap water and of elevators in high-rise buildings can significantly reduce the odds of falling among elders. Thus, it is essential to advance the construction of basic infrastructure. The fall prevention strategies drawn from this research fall in line with some fall prevention schemes proposed by the World Health Organization (WHO) [36]. Fall prevention strategies drawn from this research may be applicable to other societies. As for falls that occurred in the home environment, some of them occurred under circumstances that are potentially modifiable as well. Home hazard assessment and modification are also suggested. Prevention may focus on risks for falling that arise when seniors are performing regular activities, even though these may be perceived as low-risk events. Such activities include dressing, cooking, toileting, and transferring at home. Developing public health education programs targeting Chinese elders to increase their fall safety awareness and knowledge is merited.
Besides environmental factors, the research also highlights the role of personal characteristics and life styles in determining elders' falls. The results show that better health, exercising and higher socioeconomic status (SES) can significantly reduce the odds of falling among Chinese elders. These findings imply that encouraging physical exercise and healthier life styles should be part of the education and self-management programs that prevent older people's falls. The World Health Organization (WHO) also had similar suggestions in its fall prevention reports [36]. The main part of this research has concerned whether factors associated with senior falls differ between rural and urban residents. Thus, the study applied factors that have been found influential on Western elders' risks of falling to predict Chinese rural and urban elders' odds of falling. Findings show that all five dimensions of fall risk factors are significantly associated with urban elders' falls. Except for the income, vision and walking variables, all other fall risk factors show similar effects on urban and rural elders' risks of falling. In general, better physical and mental health as well as a higher level of satisfaction with the living environment lead to a lower risk of falling. The paper concludes that although rural and urban Chinese elders have significantly different incidence, locations, circumstances and consequences of falls, most risk factors of falls do not differ significantly. In this sense, senior fall prevention strategies that are found effective and successful among urban seniors should be applicable to rural Chinese elders and vice versa. Although gender differences in elderly falls are not a major concern of this research, the analysis does show some gender differentials in terms of the locations, circumstances and consequences of falls. This finding reminds us that when the government and organizations are building programs to prevent senior falls, gender differences should not be ignored. We acknowledge that significant social changes have occurred since 2010, when the survey data were collected. For instance, the percentage of the elder population aged 60 and over changed from about 8.9% (178 million) of the total Chinese population in 2010 to 17.3% (241 million) by the end of 2017. Aging-related issues have become even more urgent today. Meanwhile, the report released by the Chinese Aging Research Center pointed out that, along with the improvement of medical treatment in the past 10 or so years, medical insurance coverage has been expanding, especially in rural areas, and medical expenditure has increased considerably [37]. Improvement in medical treatment may have lowered death rates caused by falls among Chinese elders. The third major social change among seniors is that, under the rapid urbanization of China, the urban-rural elder population has been redistributed, with the percentage of urban elders increasing among the total senior population. This transition is accompanied by an overall improvement in the living conditions of Chinese citizens in both urban and rural settings. Tap water has become available to more rural families. These changes may have played a positive role in reducing elder falls, especially falls among rural seniors. Despite the above improvements in the past 10 or so years, the report by the Chinese Aging Research Center also indicated some negative aspects.
The most apparent one is that the life style of Chinese elders has become more sedentary, with 50% of them reporting never exercising [37]. Meanwhile, more elders enjoy surfing the internet, doing online shopping, chatting online, and so on. Since this research has discussed how environmental factors and life styles may influence elders' risks of falling, its findings have important implications for elderly fall prevention, because the social changes discussed above relate to the environmental and life style dimensions. This study has several limitations. First, the data are based on a cross-sectional survey. As with other cross-sectional studies, there is an inherent weakness in not being able to establish the cause-effect relationships of variables. In addition, recall bias may exist, especially among those elders who have poor memory. Since the data were obtained on the basis of past-year recall and only the most recent fall was queried in detail, it is possible that the frequency of falls was underreported. Prior research has shown that past-year falls are likely to be underreported by 13.0 to 32.0% [38]. Some errors may have occurred when reporting the locations, incidence and circumstances of falls. Thus, the findings of this research need to be replicated by other researchers. A prospective study is warranted to identify the incidence, risk factors, and circumstances of falls. Since the main focus of the survey was not on falls of older people, the questionnaire was not able to exhaust all potential fall risk factors, such as medications taken by the elders; balance while standing, turning, changing position or walking; and the use of assistive devices. Future research can further examine cultural factors affecting falls, for example, the effect of squatting on fall incidence. The measures of environmental factors are also scarce. These limitations have prevented the current study from exploring fall risk factors in a more comprehensive manner. Even with these limitations, this study is one of the first to analyze nationally representative data to identify the occurrence, locations, circumstances, consequences and risk factors of falls among Chinese community-dwelling elders in both rural and urban areas. The findings of this research offer an important base for conducting future prospective studies and launching possible senior fall prevention programs in China and possibly other countries. The government and local agencies may consider applying some public health approaches to falls prevention, for example, reducing income inequality and allocating more resources to disadvantaged groups. These factors are outside the traditional individual-centred clinical approach to falls prevention, but they should still be considered. This is one of the first studies that contrast the incidence, locations, circumstances and consequences of falls among rural and urban Chinese community-dwelling elders aged 65 and over. The study focuses on exploring whether fall risk factors differ among rural and urban elders in China. Results have revealed that the incidence, locations, circumstances and consequences of falls vary among Chinese rural and urban older people, but most risk factors of falls show similar effects on rural and urban elders' odds of falling. The findings imply that successful fall prevention strategies based on urban experience in China can also be utilized to decrease falls among rural older people and vice versa.
This research will be useful for creating optimal fall prevention strategies for older adults in China and maybe other countries as well. This article is based on a publicly available dataset derived from the 2010 wave of the Chinese Longitudinal Survey on Urban and Rural Elderly, conducted by the Chinese Research Center on Aging (CRCA). The dataset can be obtained after sending a data user agreement to the data team. Activities of daily living. Wu H, Ouyang P. Fall prevalence, time trend and its related risk factors among elderly people in China. Arch Gerontol Geriatr. 2017;73:294–9. Ruchinskas R. Clinical prediction of falls in the elderly. Am J Phys Med Rehab. 2003;82(4):273–8. Kelsey JL, Proctergray E, Hannan MT, Li W. Heterogeneity of falls among older adults: implications for public health prevention. Am J Public Health. 2012;102(11):2149–56. Graafmans WC, Ooms ME, Hofstee HM, Bezemer PD, Bouter LM, Lips P. Falls in the elderly: a prospective study of risk factors and risk profiles. Am J Epidemiol. 1996;143(11):1129–36. Er Y, Duan L, Ye P, Jiang Y, Ji C, Deng X, et al. Epidemiological characteristics of fall in old population: results from National Injury Surveillance in China 2014. Chinese J Epidemiol. 2016;37(1):24–8. Center for Health Statistics and Information (CHSI). National Health Service Survey Analysis Report on the Fifth Family Health Interview Survey. Beijing: State Council Information Office; 2012. Kwan MS, Close JCT, Wong AKW, Dsc SRL. Falls incidence, risk factors, and consequences in Chinese older people: a systematic review. J Am Geriatr Soc. 2011;59(3):536–43. Liang Z, Chen YP, Gu Y. Rural industrialization and internal migration in China. Rural Industrialization and Migration. 2002;39(12):2175–87. Poston D, Duan CR. The current and projected distribution of the elderly and eldercare in the People's republic of China. J Fam Issues. 2000;21(6):714–32. Solinger DJ. Contesting citizenship in urban China: peasant migrants, the state, and the logic of market. Berkeley: University of California Press; 1999. Wu X, Treiman DJ. The household registration system and social stratification in China: 1955-1996. Demography. 2004;41(2):363–84. Zhao Y. Leaving the countryside: rural-to-urban migration decisions in China. Am Econ Rev. 1999;89(2):281–6. Wu L. Inequality of pension arrangements among different segments of the labor force in China. J Aging Soc Policy. 2013;25(2):181–96. Jackson R, Howe N. The graying of the middle kingdom: the demographics and economics of retirement policy in China. The Center for Strategic and International Studies: Washington, D.C; 2004. Zeng Y, Wang Z. Dynamics of family and elderly living arrangements in China: new lessons learned from the 2000 census. China Rev. 2003;3(2):95–119. Zhang L. Research on Chinese elderly's living arrangements and desirable living arrangements patterns. Popul J (in Chinese). 2012;6:25–33. Ding ZH, Du SR, Wang MX. Research on the falls and its risk factors among the urban aged in China. Popul Dev (in Chinese). 2018;24(4):120–9. Becker C, Rapo K. Fall prevention in nursing homes. Clin Geriatr Med. 2010;26:693–704. Nevitt MC, Cummings SR, Kidd S, Black D. Risk factors for recurrent nonsyncopal falls: a prospective study. JAMA. 1989;251:2663–8. Rubenstein LZ, Josephson KR, Robbins AS. Falls in the nursing home. Ann Intern Med. 1994;121:442–51. Ruthazer R, Lipsitz LA. Antidepressants and falls among elderly people in long-term care. Am J Public Health. 1993;83(5):746–9. Northridge ME, Nevitt MC, Kelsey JL. 
Home hazards and falls in the elderly: the role of health and functional status. Am J Public Health. 1995;85:509–15. Chu LW, Chi I, Chiu AY. Incidence and predictors of falls. Ann Acad Med Singap. 2005;34(1):60–72. Bath PA, Morgan K. Differential risk factor profiles for indoor and outdoor falls in older people living at home in Nottingham. UK European J Epidemiol. 1999;15:65–73. Bergland A, Jarnlo GB, Laake K. Predictors of falls in the elderly by location. Aging Clin Exp Res. 2003;15(1):43–50. Bergland A, Pettersen AM, Laake K. Falls reported among elderly Norwegians living at home. Physiother Res Int. 1998;3(3):164–74. Li W, Keegan TH, Sternfeld B, Sidney S, Jr QC. Outdoor falls among middle-aged and older adults: a neglected public health problem. Am J Public Health. 2006;96(7):1192–200. Kelsey JL, Berry SD, Procter-Gray E, Quach HL, Uyen-Sa DT, Nguyen DS, et al. Indoor and outdoor falls in older adults are different: the maintenance of balance, independent living, intellect, and zest in the elderly of Boston study. J Am Geriatr Soc. 2010;58(11):2135–41. Yesavage J, Brink TL. Development and validation of a geriatric depression scale: a preliminary report. J Psychiatr Res. 1983;17:39–49. Campbell AJ, Borrie MJ, Spears GF, Jackson SL, Brown JS. Circumstances and consequences of falls experienced by a community population 70 years and over during a prospective study. Age and Aging. 1990;19(2):136–41. Lord SR, Ward JA, Williams P, Anstey KJ. An epidemiological study of falls in older community-dwelling women: the Randwick falls and fractures study. Aust J Public Health. 1993;17:240–5. Svensson ML, Rundgren A, Landahl S. Falls in 84- to 85-year-old people living at home. Accid Anal Prev. 1992;24:527–37. Pi HY, Hu MM, Zhang J, Peng PP, Nie D. Circumstances of falls and fall-related injuries among frail elderly under home care in China. Int J Nurs Sci. 2015;2:237–43. Wang QQ, Zhang YZ, Wu CA. An analysis of fall risk factors among Chinese elders: based on studying China health and retirement longitudinal study (CHARLS) data. Chin J Gerontol. 2019;39:3794–8. Lee SM, Lee C, Rodiek S. Outdoor exposure and perceived outdoor environments correlated to fear of outdoor falling among assisted living residents. Aging Ment Health. 2019;31:1–9 Published online. World Health Organization. WHO global reports falls prevention in older age. 2007. Chinese Aging Research Center. Aging blue book: survey report on Chinese rural and urban elders' living condition. 2018. Cummings SR, Nevitt MC, Kidd S. Forgetting falls: the limited accuracy of recall of falls in the elderly. J Am Geriatr Soc. 1988;36:613–6. The data used in the study come from Chinese Longitudinal Survey on Urban and Rural Elderly, conducted by the Chinese Research Center on Aging (CRCA). The authors would like to thank the above institute and members of the institute. This research is supported by Program for Young Innovative Research Team at China University of Political Science and Law (Grant No. 19CXTD-04). The funder had no role in the design of the study, in collection, analysis and interpretation of data, and in writing the manuscript. China University of Political Science and Law, Beijing, 102249, China Li Zhang & An Li Central University of Finance and Economics, Beijing, 102206, China Zhihong Ding Peking University, Beijing, 100871, China Liya Qiu Search for Li Zhang in: Search for Zhihong Ding in: Search for Liya Qiu in: Search for An Li in: LZ conducted literature review, analyzed data and drafted the text. 
ZHD, LYQ and AL analyzed and interpreted the data. All authors read and approved the final manuscript. Correspondence to Li Zhang. The dataset used in this study is a publicly available dataset. Not applicable. Zhang, L., Ding, Z., Qiu, L. et al. Falls and risk factors of falls for urban and rural community-dwelling older adults in China. BMC Geriatr 19, 379 (2019) doi:10.1186/s12877-019-1391-9 Rural-urban difference Physical functioning, physical health and activity
CommonCrawl
Realistic propagation effects on wireless sensor networks for landslide management Nattakarn Shutimarrungson1 & Pongpisit Wuttidittachotti ORCID: orcid.org/0000-0002-0076-48821 EURASIP Journal on Wireless Communications and Networking volume 2019, Article number: 94 (2019) Cite this article This paper presents the development of propagation models for wireless sensor networks for landslide management systems. Measurements of path loss in potential areas of landslide occurrence in Thailand were set up. The effect of the vegetation and mountain terrain in the particular area was therefore taken into account regarding the measured path loss. The measurement was carried out with short-range transmission/reception at 2400 MHz corresponding to IEEE 802.15.4 wireless sensor networks. The measurement setup was divided into two main cases, namely, the transmitting and receiving antennas installed on the ground and 1-m high above the ground. The measurement results are shown in this paper and used to develop propagation models suitable for operation of short-range wireless sensor networks of landslide management systems. The propagation model developed for the first case was achieved by fitting the averaged experimental data by the log-normal model plus the standard deviation. For the second case, the model was derived from the ray tracing theory. The mountain-side reflection path was added into the model which contained the reflection coefficient defined for the soil property. Furthermore, the resulting propagation models were employed in order to realistically evaluate the performance of wireless sensor networks via simulations which were conducted by using Castalia. In the simulations, the sensor nodes were placed as deterministic and random distributions within square simulated networks. The comparison between the results obtained from the deterministic and random distributions are discussed. A landslide, which is a globally widespread and short-lived phenomenon, causes not only a number of human losses of life and injury but also extensive economic damage to private and public properties. The main factors of landslide occurrences are steep slope angles along with accumulated rainfall, moisture, and pore pressure saturation in the soil [1]. Thailand, located at the center of peninsular Southeast Asia and covered by a number of mountainous plateau areas, is one of the countries that most face rainfall-induced landslides every year [2]. In order to avoid or reduce the loss due to landslide disasters, there is a need for a landslide management system that can monitor and/or predict landslide occurrence. A landslide management system is an essential key to reducing losses due to landslides by generating early warning for people living in potential landslide areas. In order to achieve an underlying system, sensors such as rain gauges, moisture sensors, piezometers, tiltmeters, geophones, and strain gauges can be installed in the potential landslide areas in order to collect the essential information needed to perform data analysis for landslide monitoring and prediction. Some examples of the use of sensors to monitor and/or predict landslides can be seen in [1, 3,4,5,6]. Besides sensor technologies, a communication network is also required for sending the information collected by sensors. Wireless sensor networks have received considerable interest in the research area of landslide monitoring and prediction, as seen in examples [1, 5,6,7]. 
Another example proposed using a wireless sensor network to collect ambient data for general applications, including landslide monitoring and prediction [8]. The performance of IEEE 802.15.4, commonly known as Zigbee and a leading technology for short-range wireless sensor networks, was measured to verify that it can be used for a wide variety of applications [9]. In [10], a wireless sensor network was developed and then installed in a landslide area in Italy in order to monitor and manage the risks of landslides. Several parameters collected by sensors equipped with the coordinator of the wireless sensor network were employed to assess the possible risks and to provide useful information for an early warning system. Furthermore, an open-source wireless sensor system, called SMARTCONE, was designed and implemented for detecting the occurrence of slope movement and debris flow on hillsides [11]. The performance of the proposed system was evaluated via experimentation. Another issue arising from the literature review is that the deployment of a wireless sensor network inside a potential landslide area requires knowledge of the node distance, which is relevant to the path loss, in order to provide full connectivity. An appropriate propagation model employed to predict such a path loss and the received-signal coverage is therefore essential for network planning. Extensive research has been conducted on propagation models, including theoretical and empirical models of wireless sensor networks. Some empirical models have been proposed in order to determine the path loss for a wide range of operating frequencies [12,13,14,15,16,17,18]. Although these proposed models are simple, there is no parameter that controls the relationship between the models and the forest environments. In [19], a half-space model for dealing with wave propagation at frequencies of 1–100 MHz in forest areas was proposed. In this approach, the associated phenomenon dominated by a lateral wave mode of propagation was also discussed. Subsequently, this approach was extended to the dissipative dielectric slab model in order to take the ground effect into account for wave propagation at frequencies of 2–200 MHz in the forest environment [20]. Recently, near-ground wave propagation was examined in a tropical plantation, as seen in the example of [21], where the experiment was conducted in the very high frequency (VHF) and ultra-high frequency (UHF) bands. In this approach, the ITU-R model was slightly modified by taking the lateral wave effect into account. Moreover, the ITU-R model was further improved by considering the effect of rain attenuation measured in Malaysia [22]. As mentioned, although those approaches are simple and valid for wave propagation in forest areas, they do not consider the realistic effects due to the environment in the context of landslide areas, especially in Thailand. In this paper, the measurement of path loss in one of the potential landslide areas in Thailand was examined. The measurement results were then employed to develop appropriate propagation models for particular landslide-monitoring/prediction applications. Simulations with realistic propagation effects were conducted in order to evaluate the performance of applying the wireless sensor network to landslide management systems. The main contributions of this paper are summarized as follows.
The measurement of the path loss was set up in accordance with the practical situation of applying the wireless sensor network to the landslide management of Thailand, whose climate and terrain are unique. In the measurement, the transmitting and receiving antennas of the short-range wireless sensor network, operating at a frequency of 2400 MHz, were placed on the ground and at a 1-m height above the ground. This was done in order to investigate the effect of the antenna height on the propagation model. The propagation models were then developed for the two cases, namely, the transmitting and receiving antennas installed on the ground and 1 m above the ground. The first model was achieved by fitting the averaged experimental data with the log-normal model plus the standard deviation. The second model was derived from ray tracing theory. The mountain-side reflection path was added to the model, which contains the reflection coefficient defined by the soil property. The evaluation of the performance of the wireless sensor networks was presented via simulations, where realistic wave propagation was taken into account. Following this, the propagation prediction models are discussed along with the derivation of their equations in Section 2. These models are employed for comparison with the models proposed in this paper. The measurement and estimation of the path loss of the wireless sensor network are presented in Section 3. The simulation setup for evaluating the performance of the wireless sensor network for landslide management systems is discussed in Section 4, where the results are also presented and discussed. Finally, the conclusions are drawn in Section 5. All results are described by the dataset of simulation and experimental results in Section 6 (Additional file 1).
Basic principles of propagation prediction models
In this section, we give an overview of the basic principles of the propagation prediction models which were employed for comparison with the proposed ones. There are several propagation models that have been employed to estimate path loss. One of the simplest and most popular models is the free-space loss, given by [23]:
$$ \mathrm{PL}_{\mathrm{free}}\ (\mathrm{dB}) = -27.56 + 20\log_{10}(f) + 20\log_{10}(d) \qquad (1) $$
where f and d, respectively, are the frequency in megahertz and the distance between the isotropic transmitting and receiving antennas in meters. This theoretical propagation model is practicable for operation in the far-field region when there are no obstacles in the first ellipsoid of the Fresnel zone. In [12], Weissberger developed an empirical model that can estimate the excess attenuation due to vegetation, written as follows:
$$ \mathrm{PL}_{\mathrm{Weissberger}}\ (\mathrm{dB}) = \begin{cases} 0.45\, f^{0.284}\, d & d < 14\ \mathrm{m} \\ 1.33\, f^{0.284}\, d^{0.588} & 14\ \mathrm{m} < d \le 400\ \mathrm{m} \end{cases} \qquad (2) $$
where f and d are in gigahertz and meters, respectively. The COST 235 models are proposed in [16] based on measurements conducted in the band between 9.6 and 57.6 GHz through a grove of trees.
The model divides the propagation scenario into two different conditions as follows:
$$ \mathrm{PL}_{\mathrm{COST}}\ (\mathrm{dB}) = \begin{cases} 15.6\, f^{-0.009}\, d^{0.26} & \text{in leaf} \\ 26.6\, f^{-0.2}\, d^{0.5} & \text{out of leaf} \end{cases} \qquad (3) $$
where f and d are in megahertz and meters, respectively. In [15], the FITU-R model was developed based on the ITU-R recommendation [13]. The numerical parameters were optimized using the least squared error to fit several sets of measurement data at frequencies of 11.2 and 20 GHz. The model is written as:
$$ \mathrm{PL}_{\mathrm{FITU\text{-}R}}\ (\mathrm{dB}) = \begin{cases} 0.39\, f^{0.39}\, d^{0.25} & \text{in leaf} \\ 0.37\, f^{0.18}\, d^{0.59} & \text{out of leaf} \end{cases} \qquad (4) $$
where f and d are in megahertz and meters, respectively. The FITU-R model was also modified for the VHF and UHF bands based on measurements in a palm plantation at frequencies of 240 MHz and 700 MHz. The modification takes the excess foliage loss together with the lateral wave effect into account. This model is called the lateral ITU-R model [24], which can be used for long-range propagation in foliage areas and is defined by:
$$ \mathrm{PL}_{\mathrm{LITU\text{-}R}}\ (\mathrm{dB}) = 0.48\, f^{0.43}\, d^{0.13} \qquad (5) $$
where f and d are in megahertz and meters, respectively. This model is valid for the in-leaf case. In addition to the theoretical and empirical propagation models discussed above, a log-normal model, one of the most popular models based on the probabilistic distribution of the additional attenuation, has been widely used to predict path loss [25]. This model is defined by:
$$ \mathrm{PL}_{\log\text{-}\mathrm{normal}}\ (\mathrm{dB}) = \mathrm{PL}(d_0) + 10 n \log_{10}\frac{d}{d_0} + X_{\sigma} \qquad (6) $$
where n is the path loss exponent indicating the rate at which the signal attenuates with distance. Generally, n is equal to 2 for free space. PL(d0) is the path loss at a known reference distance d0 in the far-field region. Xσ denotes a zero-mean Gaussian-distributed random variable (in dB) with standard deviation σ (in dB). Experimentally, it has been found that the path loss in cluttered multipath environments is log-normally distributed, involving shadowing effects. In order to guarantee the suitability of the models, the root mean square error (RMSE) [15] between the data obtained from the predicted models and the measurement should be determined. The RMSE is a useful tool that can be used to measure the difference between the path loss predicted by a model and that measured by the radio frequency (RF) equipment. The RMSE is given by:
$$ \mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{k}\left(X_{\mathrm{obs},i} - X_{\mathrm{model},i}\right)^2}{k}} \qquad (7) $$
where Xobs and Xmodel are the measured and predicted data, while k denotes the number of samples.
Measurement and propagation estimation
In this section, we discuss the setup of the measurement of the path loss. The measurement results are shown and then discussed as well. The propagation models suitable for the operation of wireless sensor networks of landslide management systems in a particular area of Thailand are introduced based on the existing models and our measurement results.
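Before turning to the measurements, the reference models of Section 2 (Eqs. (1)–(7)) can be transcribed directly into code; the sketch below is our own illustrative transcription, not code from the paper. The unit conventions follow the text (f in MHz, except for the Weissberger model, which takes f in GHz), and the random shadowing term of Eq. (6) is only drawn when a non-zero σ is supplied, which is an implementation choice.

```python
import numpy as np

def pl_free_space(f_mhz, d_m):
    """Free-space path loss in dB, Eq. (1); f in MHz, d in metres."""
    return -27.56 + 20 * np.log10(f_mhz) + 20 * np.log10(d_m)

def pl_weissberger(f_ghz, d_m):
    """Weissberger excess vegetation attenuation in dB, Eq. (2); f in GHz."""
    if d_m < 14:
        return 0.45 * f_ghz ** 0.284 * d_m
    if d_m <= 400:
        return 1.33 * f_ghz ** 0.284 * d_m ** 0.588
    raise ValueError("The Weissberger model is defined only for d <= 400 m")

def pl_cost235(f_mhz, d_m, in_leaf=True):
    """COST 235 excess loss in dB, Eq. (3)."""
    return 15.6 * f_mhz ** -0.009 * d_m ** 0.26 if in_leaf else 26.6 * f_mhz ** -0.2 * d_m ** 0.5

def pl_fitu_r(f_mhz, d_m, in_leaf=True):
    """FITU-R excess loss in dB, Eq. (4)."""
    return 0.39 * f_mhz ** 0.39 * d_m ** 0.25 if in_leaf else 0.37 * f_mhz ** 0.18 * d_m ** 0.59

def pl_litu_r(f_mhz, d_m):
    """Lateral ITU-R excess loss in dB (in-leaf case), Eq. (5)."""
    return 0.48 * f_mhz ** 0.43 * d_m ** 0.13

def pl_log_normal(d_m, pl_d0, n, d0=1.0, sigma=0.0, rng=None):
    """Log-normal shadowing model in dB, Eq. (6)."""
    x_sigma = (rng or np.random.default_rng()).normal(0.0, sigma) if sigma > 0 else 0.0
    return pl_d0 + 10 * n * np.log10(d_m / d0) + x_sigma

def rmse(measured, predicted):
    """Root mean square error between measured and predicted path loss, Eq. (7)."""
    measured, predicted = np.asarray(measured, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((measured - predicted) ** 2)))
```

For example, rmse(measured_loss, [pl_log_normal(d, pl_d0=52.53, n=1.6) for d in distances]) performs the kind of model-versus-measurement comparison reported in the next section.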
Measurement setup
In order to develop an appropriate propagation model for the short-range wireless sensor network of the landslide management systems, measurements were conducted in a potential landslide area on a small mountain in Nakhon Ratchasima province, Thailand, during the rainy season, when the average monthly rainfall was 80 mm. The area chosen for the measurement was similar to a barren mountain, where the terrain mainly consists of soil and sand. Figure 1 shows a basic block diagram of our measurement setup. The RF propagation measurements were performed at 2400 MHz by using RF equipment, including a signal generator (Hewlett Packard 83620B), a spectrum analyzer (Agilent Technologies N9020A MXA), and transmitting and receiving antennas. The continuous wave (CW) was generated by using the signal generator at 2400 MHz with a power of 17 dBm. Vertically polarized omnidirectional antennas with a typical gain of 5 dBi were employed for the measurement. The loss in the RF cable was 5.6 dB. The measurement data were captured by using the spectrum analyzer and stored in a control computer via a GPIB interface for post-processing. In this paper, the measurement was divided into two main cases. First, the transmitting and receiving antennas were placed on the ground, since it is easy to place many small sensors, including transmitters and receivers, on the ground in order to measure data such as seismic vibrations, average monthly rainfall, and relative humidity, which are used for landslide detection in the practical situation of wireless sensor networks. Second, the height of the receiving and transmitting antennas was 1 m above the ground. This was done in order to determine the effect of the antenna height on the propagation model. In some scenarios, a 1-m antenna tower can probably be installed in the landslide area. Figure 2 shows the RF propagation measurements done in the potential landslide area. In order to study the short-range wave propagation in such an area, the distance d between the transmitting and receiving antennas was varied from 0.5 to 50 m with a step of 0.5 m, resulting in 100 measured positions. At each position, the measurement was repeated 30 times in order to achieve accurate results. System calibration of the measurement data was performed by removing the antenna gains and the cable losses of the transmitter and receiver. Note that our measurement was done horizontally on the mountain. The vertical direction was not considered because of the limitations of the measurement setup. However, this problem will be resolved and discussed in a future publication.
Basic block diagram of the measurement setup
Measurement setup. a First case. b Second case
Measurement results and developed propagation models
In order to evaluate the possibility of using the well-known propagation models, including the COST235, FITU-R, LITU-R, Weissberger, log-normal, and free space models, to achieve appropriate path-loss prediction in the landslide area, the measured path loss versus distance for the first case of the antennas on the ground was plotted, together with its average and the COST235, FITU-R, LITU-R, Weissberger, log-normal, and free space models, as shown in Fig. 3. In the figure, the ability to predict the path loss using the COST235, FITU-R, LITU-R, Weissberger, and free space models becomes poor as the distance increases. These models underestimated the measured path loss significantly, by up to 68 dB at 25 m.
On the other hand, the log-normal model, whose path loss exponent was initially deduced as n = 2.4, was more suitable for curve fitting. The path loss exponent was then varied as n = 1.5, 1.6, 1.7, 1.8, and 1.9. Figure 4a shows the path loss of the log-normal model with varying path loss exponent. The root mean square (RMS) error given in (7), between the measured data and the path loss predicted by the log-normal model, was calculated in order to investigate the performance of the fitted curve. The RMS errors obtained from the log-normal models with n = 1.5, 1.6, 1.7, 1.8, and 1.9 were 3.16, 2.63, 3.02, 4.08, and 5.43, respectively. It can be seen that the path loss of the log-normal model with n = 1.6 was closest to that of the averaged measurement data. In this paper, we developed a propagation model from our measurement results, specifically for wireless sensor networks of landslide management systems. Reconsidering the measurement results, the measured path loss at 1 m was 52.53 dB and was therefore chosen as the reference distance path loss PL(d0).
Path loss obtained from measurement and prediction of the first case with antennas on the ground
Path loss of a log-normal model and b developed propagation model, compared with measurement data of the first case with antennas on the ground
Thus, the propagation model developed for the wireless sensor network for the landslide management systems was achieved on the basis of the log-normal model and our measurement data, as given by:
$$ \mathrm{PL}_{\mathrm{on\ ground}}\ (\mathrm{dB}) = \mathrm{PL}(d_0) + 10 n \log_{10} d + \mathrm{SD} \qquad (8) $$
where SD denotes the measured standard deviation of the received power around the average. Here, the standard deviation SD of this measurement case was 8.42. The path loss PL(d0) at the known reference distance d0 = 1 m was 52.53 dB. Figure 4b shows the path loss of the developed propagation model with its path loss exponent varied as n = 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, and 2. The RMS errors between the developed propagation model and the averaged measurement data were 3.20, 3.07, 3.48, 4.30, 5.33, 6.47, and 7.68 when n = 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, and 2, respectively. Note that the path loss of the developed propagation model with SD = 8.42 and n = 1.5 was almost identical to that of the predicted conventional log-normal model. This implies that we can use our developed propagation model instead of the conventional log-normal model for the wireless sensor network for landslide management systems. Figure 5 shows the predicted and measured path losses obtained from the second case of a 1-m antenna height above the ground. In the figure, the log-normal and free space models were employed to predict the path loss. It was seen that the log-normal models with different path loss exponents, i.e., n = 1.8, 2.0, and 2.4, overestimated the measured path loss significantly. The RMS errors of the log-normal models with n = 1.8, 2.0, and 2.4 were 8.50, 11.42, and 17.63, respectively. On the other hand, the RMS error of the free space model was 3.21. Although the use of this model to predict the measured path loss was more suitable than using the log-normal model, the RMS error was somewhat high as compared with the former case. Note that some significant fluctuations appeared at distances of 21 to 26 m in the measured data.
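Returning briefly to the first (on-ground) case, the sketch below evaluates Eq. (8) and selects the path loss exponent against measured data by minimising the RMSE of Eq. (7). The default values PL(d0) = 52.53 dB and SD = 8.42 are the figures reported above; the exponent search itself is our illustration of the fitting procedure, not the exact script used by the authors.

```python
import numpy as np

def path_loss_on_ground(d_m, pl_d0=52.53, n=1.5, sd=8.42):
    """Developed on-ground model, Eq. (8): PL = PL(d0) + 10*n*log10(d) + SD."""
    return pl_d0 + 10 * n * np.log10(np.asarray(d_m, dtype=float)) + sd

def best_exponent(distances, measured, candidates=(1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0)):
    """Return the exponent with the smallest RMSE against the measured path loss."""
    distances = np.asarray(distances, dtype=float)
    measured = np.asarray(measured, dtype=float)
    errors = {
        n: float(np.sqrt(np.mean((measured - path_loss_on_ground(distances, n=n)) ** 2)))
        for n in candidates
    }
    n_best = min(errors, key=errors.get)
    return n_best, errors
```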
A propagation model specific for the wireless sensor network of the landslide management systems should be considered when employing 1-m-high antennas above the ground. In this paper, we introduced the development of the propagation model by using a multiple-ray tracing model to fit the measured data. Path loss obtained from measurement and prediction of the second case with 1-m antenna height above the ground In case of the 1-m-high antennas, we determined a basic geometry model for the measurement setup again, as depicted in Fig. 6. The transmitting and receiving antennas were placed on the mountain at heights of hT and hR, respectively. The d denotes the direct distance between transmitting and receiving antennas. In this paper, the propagation model developed by using the multi-ray tracing considers three major coexisting transmission paths, namely, line-of-sight (LOS), ground reflection, and mountain-side reflection paths. Note that the mountain-side reflection path was added to the developed model instead of the conventional two-ray ground-reflection model. The total received power was calculated from the combination of the individual received power from LOS, ground reflection, and mountain-side reflection paths. Based on the Friis transmission formula, we derived the equation of the received power from three-ray tracing including line-of-sight (LOS), ground reflection, and mountain-side reflection paths, as written by: $$ {P}_r={P}_t{\left(\frac{\lambda }{4\pi}\right)}^2{\left(\frac{\sqrt{G_{\mathrm{LOS}}}}{d_{\mathrm{LOS}}}-{\Gamma}_{\mathrm{gr}}\frac{\sqrt{G_{\mathrm{gr}}}{e}^{-j2\pi \left({d}_{\mathrm{gr}}-{d}_{\mathrm{LOS}}\right)/\lambda }}{d_{\mathrm{gr}}}-{\Gamma}_m\frac{\sqrt{G_m}{e}^{-j2\pi \left({d}_m-{d}_{\mathrm{LOS}}\right)/\lambda }}{d_m}\right)}^2 $$ The basic geometry of the experimentation setup on the mountain in the second case of 1-m-high antennas where Pr and Pt denote the received and transmitted powers, respectively. GLOS, Ggr, and Gm are the combined antenna gains along the LOS, ground reflection, and mountain-side reflection paths. Since the pattern of all antennas employed in our measurement was omnidirectional, the GLOS, Ggr, and Gm were therefore equal. The Γgr and Γm were reflection coefficients of the ground and mountain side, respectively. Since the measurement setup was conducted on a mountain, the Γgr and Γm were equal as well, which were re-denoted as Γ. Practically, the reflection coefficient Γ can be achieved by measurement of the soil property. The use of the reflection coefficient for path loss prediction is very useful when there is a need to change the considered landslide area having different soil properties. The distances of the wave travel from the transmitting antenna to the receiving antennas along the LOS, ground reflection, and mountain-side reflection paths were denoted as dLOS, dgr, and dm, respectively. Applying trigonometry to this problem, the distances dLOS, dgr, and dm can be given by: \( {d}_{\mathrm{LOS}}=\sqrt{{\left({h}_T-{h}_R\right)}^2+{d}^2} \) (10) $$ {d}_{\mathrm{gr}}=\sqrt{{\left({h}_T+{h}_R\right)}^2+{d}^2} $$ $$ {d}_m=\sqrt{{\left({h}_T\cot \theta +{h}_R\cot \theta \right)}^2+{\left(d/2\right)}^2} $$ These equations indicate that the distances dLOS, dgr, and dm depended upon the antenna height, mountain slope θ, and direct distance d between the transmitting and receiving antennas. 
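The geometry in (10)-(12) and the three-ray received power in (9) translate directly into code. The sketch below assumes, as in the text, equal antenna gains on all three paths and a common reflection coefficient Γ, and takes the squared magnitude of the complex sum; the function names and example values are illustrative.

# Path lengths of Eqs. (10)-(12): LOS, ground reflection, mountain-side reflection
ray_distances <- function(d, hT, hR, theta) {
  list(d_los = sqrt((hT - hR)^2 + d^2),
       d_gr  = sqrt((hT + hR)^2 + d^2),
       d_m   = sqrt((hT / tan(theta) + hR / tan(theta))^2 + (d / 2)^2))
}

# Received power of Eq. (9) with equal gains G and a common reflection coefficient
p_received <- function(Pt, d, hT, hR, theta, Gamma, G = 1, freq = 2.4e9) {
  lambda <- 3e8 / freq
  r <- ray_distances(d, hT, hR, theta)
  phase <- function(dx) exp(-1i * 2 * pi * (dx - r$d_los) / lambda)
  field <- sqrt(G) / r$d_los -
           Gamma * sqrt(G) * phase(r$d_gr) / r$d_gr -
           Gamma * sqrt(G) * phase(r$d_m)  / r$d_m
  Pt * (lambda / (4 * pi))^2 * Mod(field)^2
}

# Example: 1-m antennas, 30-degree slope, Gamma = -0.8, about 17 dBm (0.05 W)
p_received(Pt = 0.05, d = 10, hT = 1, hR = 1, theta = 30 * pi / 180, Gamma = -0.8)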
To simplify (9) and to fit the experimental data with the path loss PL(d0) at a known reference distance d0, the total path loss can be rewritten as
$$ {\mathrm{PL}}_{\mathrm{above}\kern0.17em \mathrm{ground}}\left(\mathrm{dB}\right)=\mathrm{PL}\left({d}_0\right)+20\log \left[\frac{1}{d_{\mathrm{LOS}}}-\Gamma \frac{e^{-j2\pi \left({d}_{\mathrm{gr}}-{d}_{\mathrm{LOS}}\right)/\lambda }}{d_{\mathrm{gr}}}-\Gamma \frac{e^{-j2\pi \left({d}_m-{d}_{\mathrm{LOS}}\right)/\lambda }}{d_m}\right] $$
Here, the path loss PL(d0) at a known reference distance d0 = 1 m is 37.54 dB. The distances dLOS, dgr, and dm were not approximately equal to one another because the considered distances were short. Figure 7 shows the path loss of the developed propagation model based on the multi-ray tracing, compared with that of the free space model. Here, the slope of the mountain was set as θ = 30°. The reflection coefficient Γ, which mainly depends upon the soil properties, was varied in order to investigate the appropriateness of the developed propagation model. Γ = − 1 indicates that the soil reflects all of the transmitted power to the receiving antennas. The RMS errors obtained from the developed propagation model were 3.19, 4.48, 2.94, and 3.22 when Γ = 0, − 0.4, − 0.8, and − 1, respectively. The developed propagation model based on the multi-ray tracing and our measurement data achieved the smallest RMS error when Γ = − 0.8. Moreover, it should be noted that the path loss obtained from the developed propagation model when Γ = 0, which means the transmitted power was completely absorbed by the soil, was almost identical to that of the free space model. This indicates that there was no reflection from the ground and mountain-side paths, and only the LOS path contributes to the predicted path loss. This also reveals that our developed propagation model can be applied to other mountains with different soil properties by choosing an appropriate reflection coefficient Γ.

Path loss of the free space model and developed propagation model of the second case with 1-m antenna height above the ground

Practical simulations for wireless sensor networks

Simulation scenario

Simulations were conducted using Castalia, an extension of OMNET++, in order to evaluate the performance of the wireless sensor networks for landslide management systems. In our simulations, the measurement results presented in the previous section were employed instead of the default radio model, so the realistic propagation effects on the wireless sensor networks were taken into account. The IEEE 802.15.4 standard was chosen for the Physical (PHY) and Medium Access Control (MAC) layers with an operating frequency of 2400 MHz, corresponding to that of our measurement setup. The routing protocol deployed for the wireless sensor networks of the landslide management systems was Ad hoc On-Demand Distance Vector (AODV). The simulations were run for a fixed duration of 500 s with packets generated at a constant rate of 0.5 packets per second. The wireless channel bit rate was 250 kbps. All sensor nodes placed in the simulations were configured to set their transmission power at 57.42 mW and their sensitivity at − 100 dBm, based on the Chipcon CC2420 transceiver chip. Omnidirectional antennas with a typical gain of 5 dBi were employed for signal transmission and reception. The simulations were set up to correspond to the practical situation in the potential landslide area of Thailand.
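As a rough cross-check between the propagation results and the simulation configuration just described, the stated radio parameters can be turned into a maximum tolerable path loss. This is only illustrative arithmetic with our own variable names; it is not part of the Castalia radio model.

# Illustrative link-budget arithmetic from the radio parameters stated above
tx_power_dbm     <- 10 * log10(57.42)   # 57.42 mW expressed in dBm
sensitivity_dbm  <- -100                # receiver sensitivity (CC2420 value used above)
antenna_gain_dbi <- 5                   # per antenna

max_tolerable_pl <- tx_power_dbm + 2 * antenna_gain_dbi - sensitivity_dbm

# Predicted loss of the developed on-ground model (PL(d0) = 52.53 dB, n = 1.5,
# SD = 8.42) at the farthest measured distance, for comparison with the budget
pl_on_ground_50m <- 52.53 + 10 * 1.5 * log10(50) + 8.42
c(max_tolerable_pl = max_tolerable_pl, pl_at_50m = pl_on_ground_50m)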
Simulations were divided into two main categories in accordance with the measurement cases. First, the path loss obtained from the measurement with the antennas placed on the ground was employed as the radio model in the simulations, in order to investigate the realistic propagation effects due to the antenna position and other environmental parameters. Second, we employed the measurement results of the case of 1-m-high antennas above the ground in order to demonstrate the effect of the antenna height on the performance of the wireless sensor networks. Figure 8 shows the network simulation models. In each category of simulations, the node positions of the simulated networks were deterministically and randomly distributed within a 300 m × 300 m square, as shown in Fig. 8a and b, respectively. In practical situations, sensors installed to predict a landslide occurrence may be deployed either randomly or deterministically, depending on the choice of the user. Thus, both deterministic and random node placements should be considered. The simulation consisted of N × N sensor nodes. The distance between two adjacent sensor nodes in our simulation model depended upon the number of placed sensor nodes. All of the sensor nodes were stationary, and the sink was located at the bottom right of the network area, as shown in the figure. It should be noted that the sink can send information gathered by the sensor nodes to the gateway node, which may be installed outside the potential landslide area. The gathered data received from the gateway node through communication networks such as the Internet or mobile networks can be used to calculate/predict the landslide occurrence at the monitoring center.

Network simulation models with a deterministic and b random distributions

In order to evaluate the performance of the wireless sensor networks for the landslide management systems, two metrics, namely the packet loss rate and the packet delivery ratio, are considered in this section. The number of sensor nodes was varied from 10 × 10 to 20 × 20. Figure 9 shows the packet loss rate and packet delivery ratio versus the number of sensor nodes when the antennas were placed on the ground. It was seen that the packet loss rate increased with the number of nodes, since collisions are more likely to occur when the node density is high. The packet loss rate slightly decreased at N = 20 × 20, since the probability of successfully finding a communication path was high. The packet delivery ratio decreased when the number of nodes increased. Although the packet loss rate suddenly decreased when N = 16 × 16, compared with that when N = 15 × 15, the packet delivery ratio was still high. In the case of the deterministic node distribution, the distances between two neighboring nodes for N = 16 × 16 and N = 14 × 14 were 23 and 20 m, respectively. The packet loss rate of the random node distribution was less than 40% for all node numbers under consideration, while that of the deterministic node distribution was greater than 40% for N = 16 × 16 to N = 20 × 20. On the other hand, the packet delivery ratio of the random node distribution was greater than 70% for all node numbers under consideration, while that of the deterministic node distribution was less than 70% for N = 16 × 16 to N = 20 × 20. Note that the simulation results obtained from the use of the proposed model and the log-normal model were almost identical.
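For concreteness, the two node layouts used in the simulations, an N × N grid and a uniform random placement inside the 300 m × 300 m area, can be generated as follows. This is an illustrative sketch under one possible grid convention, not the actual Castalia configuration.

# Deterministic N x N grid or uniform random placement in a square area
place_nodes <- function(N, side = 300, random = FALSE) {
  if (random) {
    data.frame(x = runif(N^2, 0, side), y = runif(N^2, 0, side))
  } else {
    g <- seq(0, side, length.out = N)
    expand.grid(x = g, y = g)
  }
}

grid_nodes <- place_nodes(16)                  # deterministic 16 x 16 layout
diff(sort(unique(grid_nodes$x)))[1]            # spacing between adjacent grid nodes
rand_nodes <- place_nodes(16, random = TRUE)   # random layout with the same node count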
a Packet loss rate and b packet delivery ratio of the first simulation case with antennas on the ground

Figure 10 shows the packet loss rate and packet delivery ratio versus the number of sensor nodes when the antennas were placed at a 1-m height above the ground. The packet loss rate and packet delivery ratio of this case were similar to those when the antennas were placed on the ground. The random node distribution had a slightly better packet loss rate and packet delivery ratio than the deterministic node distribution. The simulation results obtained from the use of the proposed model and the free space model were almost identical as well.

a Packet loss rate and b packet delivery ratio of the second simulation case with 1-m-high antennas above the ground

In this paper, the development of propagation models has been proposed for wireless sensor networks for landslide management systems. The propagation models were developed based on the measurement data and existing propagation models. For the development, the measurement was set up using two main cases: transmitting and receiving antennas installed on the ground and 1 m above the ground. The propagation models for the first and second scenarios were developed based on the log-normal model and multi-ray tracing models, respectively, along with our experimental data. The path loss versus distance was shown to validate that the developed propagation models are suitable for the operation of landslide management systems using a wireless sensor network. Furthermore, the resulting propagation models were employed in order to realistically evaluate the performance of wireless sensor networks via simulations, which were conducted using Castalia. In the simulations, the sensor nodes were placed according to deterministic and random distributions. The simulation results confirm that the short-range wireless sensor network at an operating frequency of 2400 MHz can be employed for landslide management systems.

AODV: Ad hoc On-Demand Distance Vector
FITU: Fitted ITU
IEEE: The Institute of Electrical and Electronics Engineers
ITU-R: The International Telecommunication Union-Radiocommunications Sector
LITU-R: Lateral ITU
MAC: Media access control
PHY: Physical layer
RMSE: Root mean square error
UHF: Ultra high frequency
VHF: Very high frequency

M.V. Ramesh, V.P. Rangan, Data reduction and energy sustenance in multisensor networks for landslide monitoring. IEEE Sensors J. 14(5), 1555–1563 (2014)
H.J. Oh, S. Lee, W. Chotikasathien, C.H. Kim, J.H. Kwon, Predictive landslide susceptibility mapping using spatial information in the Pechabun area of Thailand. Environ Geol 57(3), 641 (2009)
S. Biansoongnern, B. Plungkang, S. Susuk, Development of low cost vibration sensor network for early warning system of landslides. Energy Procedia 89, 417–420 (2016)
M.V. Ramesh, Real-time wireless sensor network for landslide detection, in IEEE SENSORCOMM'09, Third International Conference on Sensor Technologies and Applications (2009), pp. 405–409
G.R. Teja, V.K.R. Harish, D.N.M. Khan, R.B. Krishna, R. Singh, S. Chaudhary, Land slide detection and monitoring system using wireless sensor networks (wsn), in 2014 IEEE International Advance Computing Conference (IACC) (2014), pp. 149–154
A. Terzis, A. Anandarajah, K. Moore, I. Wang, Slip surface localization in wireless sensor networks for landslide prediction, in Proceedings of the 5th ACM International Conference on Information Processing in Sensor Networks (2006), pp. 109–116
Y. Wang, Z. Liu, D. Wang, Y. Li, J. Yan, Anomaly detection and visual perception for landslide monitoring based on a heterogeneous sensor network. IEEE Sensors J. 17(13), 4248–4257 (2017)
F. Wang, J. Liu, L. Sun, Ambient data collection with wireless sensor networks. EURASIP J. Wirel. Commun. Netw. (2010). https://doi.org/10.1155/2010/698951
P.R. Casey, K.E. Tepe, N. Kar, Design and implementation of a testbed for IEEE 802.15.4 (Zigbee) performance measurements. EURASIP J Wirel Commun Netw 23 (2010). https://doi.org/10.1155/2010/103406
A. Giorgetti, M. Lucchi, E. Tavelli, M. Barla, G. Gigli, N. Casagli, M. Chiani, D. Dardari, A robust wireless sensor network for landslide risk analysis: system design, deployment, and field testing. IEEE Sensors J. 16(16), 6374–6386 (2016)
H.C. Lee, K.H. Ke, Y.M. Fang, B.J. Lee, T.C. Chan, Open-source wireless sensor system for long-term monitoring of slope movement. IEEE Trans Instrumentation and Measurement 66(4), 767–776 (2017)
M.A. Weissberger, An initial critical summary of models for predicting the attenuation of radio waves by trees (Electromagnetic Compatibility Analysis Center, Annapolis, MD, 1981) ECAC-TR-81-101
CCIR, Influences of terrain irregularities and vegetation on troposphere propagation. Geneva, Switzerland, pp. 235–236. CCIR Rep. (1986)
A. Seville, K.H. Craig, Semi-empirical model for millimetre-wave vegetation attenuation rates. Electron. Lett. 31(17), 1507–1508 (1995)
M.O. Al-Nuaimi, R.B.L. Stephens, Measurements and prediction model optimization for signal attenuation in vegetation media at centimetre wave frequencies. IEE Proceedings-Microwaves, Antennas and Propagation 145(3), 201–206 (1998)
COST 235, Radio Propagation Effects on Next Generation Fixed-Service Terrestrial Telecommunication Systems. Luxembourg. Final Rep. (1996)
H.Y. Chen, Y.Y. Kuo, Calculation of radio loss in forest environments by an empirical formula. Microw. Opt. Technol. Lett. 31(6), 474–480 (2001)
J. Liang, Q. Liang, S.W. Samn, A propagation environment modeling in foliage. EURASIP J. Wirel. Commun. Netw. (2010). https://doi.org/10.1155/2010/873070
T. Tamir, On radio-wave propagation in forest environments. IEEE Trans. Antennas Propag. 15(6), 806–817 (1967)
T. Tamir, Radio wave propagation along mixed paths in forest environments. IEEE Trans. Antennas Propag. 25(4), 471–477 (1977)
D. TR Rao, N.T. Balachander, M.V.S.N. Prasad, Ultra-high frequency near-ground short-range propagation measurements in forest and plantation environments for wireless sensor networks. IET Wireless Sensor Systems 3(1), 80–84 (2013)
R.M.D. Islam, Y.A. Abdulrahman, T.A. Rahman, An improved ITU-R rain attenuation prediction model over terrestrial microwave links in tropical region. EURASIP J Wirel Commun Netw 189 (2012)
J.D. Parsons, The mobile radio propagation channel (Wiley, 2000)
Y.S. Meng, Y.H. Lee, B.C. Ng, Empirical near ground path loss modeling in a forest at VHF and UHF bands. IEEE Trans. Antennas Propag. 57(5), 1461–1468 (2009)
T.S. Rappaport, Wireless Communications: Principles and Practice (Vol. 2) (Prentice Hall PTR, New Jersey, 1996)

This research project was financially supported by the National Research Council of Thailand through the 2018 Graduate Research Scholarships.

Additional file: simulation and experimental results.

Faculty of Information Technology, King Mongkut's University of Technology North Bangkok, Bangkok, Thailand
Nattakarn Shutimarrungson & Pongpisit Wuttidittachotti

NS is the Ph.D. candidate who performed all the work in this paper. She is the main writer of this paper. PW is the main research supervisor of NS who helped her in fine-tuning the proposed scheme. All authors proposed the main idea, read and approved the final manuscript. Correspondence to Pongpisit Wuttidittachotti.

Nattakarn Shutimarrungson received the M.Sc. degree in Information Technology Management from Mahidol University, Bangkok, Thailand in 2004. She is currently a Ph.D. candidate in information technology, Department of Information Technology, Faculty of Information Technology, King Mongkut's University of Technology North Bangkok (KMUTNB), Thailand. Her research interests include wireless sensor networks and propagation models.

Pongpisit Wuttidittachotti is currently an associate professor and head of the Department of Data Communication and Networking, Faculty of Information Technology, King Mongkut's University of Technology North Bangkok (KMUTNB), Thailand. He received his Master of Science in Information Technology from KMUTNB in 2003. He obtained a scholarship to study in France and then received a Master of Research and Ph.D. in Networks, Telecommunications, Systems and Architectures from INPT-ENSEEIHT, in 2005 and 2009 respectively. He was also awarded a postdoctoral scholarship from University of Paris XI in 2009. His research interests include eHealth/mHealth, MANET, information security, 3G/4G/5G, networks performance evaluation, VoIP quality measurement and QoE/QoS.

Data set of simulation and experimental results. (XLSX 60 kb)

Shutimarrungson, N., Wuttidittachotti, P. Realistic propagation effects on wireless sensor networks for landslide management. J Wireless Com Network 2019, 94 (2019) doi:10.1186/s13638-019-1412-6

Keywords: Propagation models; Landslide management; Path loss measurement
CommonCrawl
The analyticity and exponential decay of a Stokes-wave coupling system with viscoelastic damping in the variational framework

Evolution Equations & Control Theory, March 2017, 6(1): 135-154. doi: 10.3934/eect.2017008

Jing Zhang, Department of Mathematics and Economics, Virginia State University, Petersburg, VA 23806, USA

Received October 2016; Revised August 2016; Published December 2016

In this paper, we study a fluid-structure interaction model of a Stokes-wave equation coupling system with Kelvin-Voigt type damping. We show that this damped coupling system generates an analytic semigroup and thus the semigroup solution, which also satisfies the variational framework of weak solutions, decays to zero at an exponential rate.

Keywords: Fluid-Structure Interaction, Stokes equation, wave equation, Kelvin-Voigt damping, analyticity, uniform stabilization.

Mathematics Subject Classification: Primary: 35M10, 35B35; Secondary: 35A01.

Citation: Jing Zhang. The analyticity and exponential decay of a Stokes-wave coupling system with viscoelastic damping in the variational framework. Evolution Equations & Control Theory, 2017, 6 (1) : 135-154. doi: 10.3934/eect.2017008
Figure 1. THE FLUID–STRUCTURE INTERACTION
CommonCrawl
Analyzing and solving the identifiability problem in the exponentiated generalized Weibull distribution

Felipe R. S. de Gusmão1, Frank Gomes-Silva (ORCID: orcid.org/0000-0002-3481-3099)1, Cícero C. R. de Brito2, Fábio V. J. Silveira1, Jader S. Jale1, Sílvio F. A. Xavier-Júnior3 & Pedro R. D. Marinho4

The well-known Weibull distribution can be used to model the decreasing and unimodal failure rates that are quite standard in reliability and biological studies. It is also commonly adopted as a baseline to generate new distributions from generalized classes. In this paper, we investigate the identifiability of the exponentiated generalized class of distributions and, in particular, of the exponentiated generalized Weibull distribution. We also develop conditions under which the model becomes identifiable. To further illustrate the identifiability issue, we consider a simulation study, and an application is presented to illustrate the potential of the model with the new parameterization.

Lately, many authors have proposed new classes of distributions, obtained by modifying cumulative distribution functions (cdf), that provide hazard rate functions (hrf) taking various shapes. Examples include the exponentiated Weibull (\(\mathcal {EW}\)) [1, 21, 22], which has an upside-down bathtub (unimodal) hrf [2]. Carrasco et al. [3] introduced a four-parameter distribution, the generalized modified Weibull distribution, whose hrf exhibits non-monotonic shapes such as bathtub and upside-down bathtub; Gusmão et al. [4] introduced and studied the three-parameter generalized inverse Weibull distribution, whose failure rate can be unimodal, increasing or decreasing. Several families proposed in the literature comprise a source of probability distributions for modeling lifetime data, since, in general, the resulting distribution and the baseline have the same support. Cordeiro et al. [5] proposed a new family, the exponentiated generalized (\(\mathcal {EG}\)) class of distributions, to generalize other distributions. Considering that a random variable T has distribution G, they suggest generalizing any distribution G by
$$\begin{aligned} F_G(t; a, b)= \left\{ 1 - \left[ 1 - G(t)\right] ^a\right\} ^b, \end{aligned}$$
where \(a > 0\) and \(b > 0\) are two additional shape parameters. The authors point out that the new class of distributions is simpler and more tractable than the generalized beta family [6]. The quantile function (qf) of the new class has a closed form, which makes simulations based on (1) easier to perform. The following well-known baseline distributions have been discussed in recent works for the exponentiated generalized class [5] (this list is not exhaustive): Birnbaum–Saunders distribution [7], generalized gamma distribution [8], Gumbel distribution [9], Dagum distribution [10], Weibull distribution [11], extended exponential distribution [12], arcsine distribution [13], standardized half-logistic distribution [14], extended Pareto distribution [15], standardized Gumbel distribution [16] and extended Gompertz distribution [17]. It is well known that the addition of parameters to distribution classes can lead to identifiability problems and consequently complicate the estimation of the parameters of the proposed model.
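Equation (1) is straightforward to code for any baseline cdf G. The short R sketch below uses illustrative function names; the example call plugs in a Weibull baseline through the base function pweibull (with scale = 1/α and shape = β, matching the baseline discussed in the next section).

# Exponentiated generalized class of Eq. (1): F(t; a, b) = {1 - [1 - G(t)]^a}^b
eg_cdf <- function(t, a, b, G, ...) (1 - (1 - G(t, ...))^a)^b

# Example with a Weibull baseline, G(t) = 1 - exp(-(alpha * t)^beta)
eg_cdf(0.2, a = 2, b = 3, G = pweibull, shape = 5, scale = 1 / 4)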
According to [18], a parameter \(\varvec{\theta }\) for a family of distributions \(\left\{ f\left( x, \varvec{\theta } \right) : \varvec{\theta } \in \varvec{\varTheta } \right\}\) is identifiable if different values of \(\varvec{\theta }\) correspond to different probability density functions (pdf) or probability mass functions. That is, if \(\varvec{\theta } \ne \varvec{\theta '}\), then \(f\left( x, \varvec{\theta } \right) \ne f\left( x, \varvec{\theta '} \right)\). Jones et al. [19] define identifiability as follows: Consider a stack of probabilities \(p_{1},...,p_{n}\), \(n \in {\mathbb {N}}\), within a single vector \(\varvec{\psi }\) with dimensions \(q \times 1\) and the parametric model with a vector \(\varvec{\gamma }\) with dimensions \(r \times 1\). The presented model, implicitly specifies, a function F that determines how \(\varvec{\psi }\) is calculated from \(\varvec{\gamma }\), $$\begin{aligned} \varvec{\psi }=F\left( \varvec{\gamma }\right) . \end{aligned}$$ Hence, the model will be identifiable if F is an invertible function; it follows that there is a one-to-one correspondence between \(\varvec{\gamma }\) and \(\varvec{\psi }\). If \(\varvec{\gamma }_{1}\ne \varvec{\gamma }_{2}\) and \(F\left( \varvec{\gamma }_{1}\right) = F\left( \varvec{\gamma }_{2}\right)\), the model will have identifiability problems. Nevertheless, Jones et al. [19] state that the model will be locally identifiable in a particular \(\varvec{\gamma }\) if F is an invertible function in the vicinity of \(\varvec{\gamma }\). In a review paper on statistical identifiability, Paulino and Pereira [20] studied issues like parallelism between parametric identifiability and sample sufficiency. They also discussed how identifiability, measures of sample information and inferential estimation concepts are related. Additionally, classic and Bayesian methods were considered as strategies for making inferences on models with parametric identification problems. Based on the aforementioned ideas and considering the relation between the parameters of the exponentiated generalized class of distributions and the baseline function, we used the Weibull distribution as a candidate for G. Using Eq. (1) and performing some mathematical manipulations, we obtain a parameterization for the exponentiated generalized Weibull (\(\mathcal {EGW}\)) distribution that was introduced by [11]. It was also studied by [1, 21, 22]. This paper aims to study the similarities that evince the problem of identifiability of the \(\mathcal {EGW}\) distribution. The \(\mathcal {EGW}\) distribution and a study on identifiability The Weibull distribution has received considerable attention in the statistical literature. Many authors have studied the shapes of the density and failure rate functions for the basic model of the Weibull distribution. Let T be a random variable with Weibull distribution, then its cdf can be written as: $$\begin{aligned} G(t) =1- \exp \left[ - \left( \alpha t \right) ^{\beta }\right] , \quad t > 0, \end{aligned}$$ where \(\alpha > 0\), \(\beta > 0\). Replacing G(t) in Eq. (1) by (2), we have $$\begin{aligned} F_{\mathcal {EGW}}(t; a, b, \alpha , \beta ) = \left\{ 1- \exp \left[ - a \left( \alpha t \right) ^\beta \right] \right\} ^b \end{aligned}$$ where \(F_{\mathcal {EGW}}(\cdot )\) is the \(\mathcal {EGW}\) cdf. 
The pdf is given by
$$\begin{aligned} f_{\mathcal {EGW}}(t; a, b, \alpha , \beta ) = a\, b\, \beta \, \alpha ^{\beta }\, t^{\beta -1} \exp \left[ - a \left( \alpha t \right) ^\beta \right] \left\{ 1- \exp \left[ - a \left( \alpha t \right) ^\beta \right] \right\} ^{b-1}, \end{aligned}$$
where \(\varvec{\theta }=\left( a, b, \alpha , \beta \right)\) is the vector of parameters of \(F_{\mathcal {EGW}}\left( t; a, b, \alpha , \beta \right)\). Consider that \(\varvec{\varTheta }_{\mathcal {EGW}}\) is the parametric space of the \(\mathcal {EGW}\) distribution, \(\varGamma\) is a specific set of indices and \(\varvec{\theta }_{i}=\left( a_{i}, b_{i}, \alpha _{i}, \beta _{i} \right) \in \varvec{\varTheta }_{\mathcal {EGW}}\) where \(a_{i}, b_{i}, \alpha _{i}, \beta _{i}>0\) for all \(i \in \varGamma\). Let \(F_{\varvec{\varTheta }_{\mathcal {EGW}}}=\left\{ F_{\mathcal {EGW}}\left( t; \varvec{\theta }_{i}\right) : \varvec{\theta }_{i} \in \varvec{\varTheta }_{\mathcal {EGW}}, \forall i \in \varGamma \right\}\) be a family of cdfs of the \(\mathcal {EGW}\) distribution. Given that \(i \ne j\) for all \(i,j \in \varGamma\), if \(\varvec{\theta }_{i} \ne \varvec{\theta }_{j} \Rightarrow F_{\mathcal {EGW}}\left( t; \varvec{\theta }_{i}\right) = F_{\mathcal {EGW}}\left( t; \varvec{\theta }_{j}\right)\), we say that \(\varvec{\varTheta }_{\mathcal {EGW}}\) is not identifiable. Let \(\varvec{\theta }_{i}\) and \(\varvec{\theta }_{j}\) be such that \(\varvec{\theta }_{i} \ne \varvec{\theta }_{j}\) with \(a_{i} \ne a_{j}\), \(b_{i} = b_{j}=b\), \(\alpha _{i} \ne \alpha _{j}\) and \(\beta _{i} = \beta _{j}=\beta\). Then, by hypothesis, we have that
$$\begin{aligned} \alpha _{i} \ne \alpha _{j} \Rightarrow \left( \alpha _{i} t\right) ^{\beta } \ne \left( \alpha _{j} t\right) ^{\beta }. \end{aligned}$$
Take \(a_{i}=\frac{a_{j} \alpha _{j}^{\beta }}{\alpha _{i}^{\beta }}\), then
$$\begin{aligned} \exists \quad a_{i} \ne a_{j} : a_{i} \left( \alpha _{i} t\right) ^{\beta } = a_{j} \left( \alpha _{j} t\right) ^{\beta } \Rightarrow F_{\mathcal {EGW}}\left( t; \varvec{\theta }_{i}\right) = F_{\mathcal {EGW}}\left( t; \varvec{\theta }_{j}\right) . \end{aligned}$$
Therefore, \(\varvec{\varTheta }_{\mathcal {EGW}}\) is not identifiable.

The \(\mathcal {EW}\) distribution and a study on identifiability

The reparameterization \(c=\alpha a^{\frac{1}{\beta }}\) solves the problem of identifiability, see the work of [23], where a is the recently introduced parameter. Without this reparameterization, various values of a and \(\alpha\) satisfy the relation \(c^{\beta }=a \alpha ^{\beta }\) for a fixed value of c. With the cited relation it is possible to rewrite Eq. (3), obtaining the \(\mathcal {EW}\) cdf:
$$\begin{aligned} F_{\mathcal {EW}}(t; b, c, \beta ) = \left\{ 1-\exp \left[ -(c t)^{\beta } \right] \right\} ^b, \end{aligned}$$
wherein \(b > 0\) is the shape parameter, and \(c > 0\) is the scale parameter. Hence, the \(\mathcal {EW}\) distribution has three parameters, and its pdf is given by
$$\begin{aligned} f_{\mathcal {EW}}(t; b, c, \beta ) = \beta \, b\, c^\beta \, t^{\left( \beta -1\right) } \exp \left[ -(c t)^{\beta } \right] \left\{ 1-\exp \left[ -(c t)^{\beta } \right] \right\} ^{b-1}.
\end{aligned}$$ Consider that \(\varvec{\varTheta }_{\mathcal {EW}}\) is the parametric space of the \(\mathcal {EW}\) distribution, \(\varGamma\) is a specific set of indices and \(\varvec{\theta }_{i}=\left( b_{i}, c_{i}, \beta _{i} \right) \in \varvec{\varTheta }_{\mathcal {EW}}\) where \(b_{i}, c_{i}, \beta _{i}>0\) for all \(i \in \varGamma\). Let \(F_{\varvec{\varTheta }_{\mathcal {EW}}}=\left\{ F_{\mathcal {EW}}\left( t; \varvec{\theta }_{i}\right) : \varvec{\theta }_{i} \in \varvec{\varTheta }_{\mathcal {EW}}, \forall i \in \varGamma \right\}\) be a family of cdfs of the \(\mathcal {EW}\) distribution. Given that \(i \ne j\) for all \(i,j \in \varGamma\), if \(\varvec{\theta }_{i} \ne \varvec{\theta }_{j} \Rightarrow F_{\mathcal {EW}}\left( t; \varvec{\theta }_{i}\right) = F_{\mathcal {EW}}\left( t; \varvec{\theta }_{j}\right)\), we say that \(\varvec{\varTheta }_{\mathcal {EW}}\) is not identifiable. The vector \(\varvec{\theta }_{i}\) differs from \(\varvec{\theta }_{j}\) in seven ways. Next, consider Case 1. Let \(\varvec{\theta }_{i}\) and \(\varvec{\theta }_{j}\) such that \(\varvec{\theta }_{i} \ne \varvec{\theta }_{j}\) with \(b_{i} \ne b_{j}\), \(c_{i} = c_{j} = c\) and \(\beta _{i} = \beta _{j} = \beta\). Then, from this hypothesis, we have the following chain of implications: $$\begin{aligned}&b_{i} \ne b_{j} \Rightarrow \left\{ 1-\exp \left[ -(c t)^{\beta } \right] \right\} ^{b_{i}} \ne \left\{ 1-\exp \left[ -(c t)^{\beta } \right] \right\} ^{b_{j}} \\&\qquad \Rightarrow F_{\mathcal {EW}}\left( t; \varvec{\theta }_{i}\right) \ne F_{\mathcal {EW}}\left( t; \varvec{\theta }_{j}\right) . \end{aligned}$$ Table 1 summarizes the proof of identifiability for each of the other cases from the hypothesis, and also displays its appropriate implications. Table 1 Proof that \(\varvec{\varTheta }_{\mathcal {EW}}\) is identifiable Therefore, the \(\varvec{\varTheta }_{\mathcal {EW}}\) is identifiable. Note that \(F_{\mathcal {EGW}}\) and \(F_{\mathcal {EW}}\) are equal functions, as long as they have the same domain and image set. However, \(F_{\mathcal {EW}}\) as an identifiable cdf has reliable estimation which is quite different from \(F_{\mathcal {EGW}}\). Let \(F_{\mathcal {EGW}}\left( t; \varvec{\theta }\right)\) for all \(t > 0\) and \(\varvec{\theta }=\left( a, b, \alpha , \beta \right)\). Hence, $$\begin{aligned} F_{\mathcal {EGW}}\left( t; \varvec{\theta }\right) =\left\{ 1-\exp \left[ -a(\alpha t)^{\beta } \right] \right\} ^b =\left\{ 1-\exp \left[ -a \alpha ^{\beta } t^{\beta } \right] \right\} ^b. \end{aligned}$$ Let \(c^{\beta }=a \alpha ^{\beta }\) where \(c > 0\), hence we have that $$\begin{aligned} F_{\mathcal {EGW}}\left( t; \varvec{\theta }\right) =\left\{ 1-\exp \left[ -c^{\beta } t^{\beta } \right] \right\} ^b =\left\{ 1-\exp \left[ -\left( c t\right) ^{\beta } \right] \right\} ^b =F_{\mathcal {EW}}\left( t; \varvec{\theta }' \right) \end{aligned}$$ where \(\varvec{\theta }'=\left( b, c, \beta \right)\). Therefore, \(F_{\mathcal {EGW}}\left( t; \varvec{\theta }\right) =F_{\mathcal {EW}}\left( t; \varvec{\theta }' \right)\) for all \(t > 0\). Monte Carlo simulations based on \(\mathcal {EGW}\) and \(\mathcal {EW}\) models Computational experiments play an important role in probability and statistics since they can verify the validity of a hypothesis, examine the performance of something new or demonstrate a known truth. 
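Before turning to the simulations, both facts above can be checked numerically. The R sketch below codes the cdfs in (3) and (7) directly (illustrative function names): two different EGW parameter vectors with the same value of a α^β produce identical cdfs, while the EW form with c^β = a α^β reproduces the EGW cdf exactly.

# cdfs of Eqs. (3) and (7)
egw_cdf <- function(t, a, b, alpha, beta) (1 - exp(-a * (alpha * t)^beta))^b
ew_cdf  <- function(t, b, c, beta)        (1 - exp(-(c * t)^beta))^b

t <- seq(0.01, 1, by = 0.01)
b <- 3; beta <- 5

# Non-identifiability of the EGW parameterization: keep a * alpha^beta fixed
a1 <- 2; alpha1 <- 4
alpha2 <- 3; a2 <- a1 * (alpha1 / alpha2)^beta
max(abs(egw_cdf(t, a1, b, alpha1, beta) - egw_cdf(t, a2, b, alpha2, beta)))  # ~ 0

# Equivalence with the identifiable EW form: c^beta = a * alpha^beta
c_val <- a1^(1 / beta) * alpha1
max(abs(egw_cdf(t, a1, b, alpha1, beta) - ew_cdf(t, b, c_val, beta)))        # ~ 0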
In this section, we present the estimates of the parameters under the maximum likelihood method for the \(\mathcal {EGW}\) and \(\mathcal {EW}\) models. They were obtained via BFGS, SANN, and Nelder–Mead, implemented with the R optim function [24]. For this, we implemented two other functions to automate the simulations: fitDist and getSimulation. The pseudo-codes of these algorithms, as well as these functions, can be seen in "Appendix." Nowadays, with the available computational resources, such as parallel processing on many cores and multiple processes, it is possible to speed up computational simulations. Therefore, we ran the simulations in parallel processes to exploit high-performance computing and reduce runtime. The results of the simulations, as well as their execution times, were gathered from a notebook with an Intel® Core™ i5-7200U CPU, 2.50 GHz, 2712 MHz, 2 cores, 4 logical processors, 8.00 GB RAM, Microsoft® Windows 10 Home Single Language, x64 system, R version 3.6.1, and RStudio version 1.2.5001.

Simulation for the \(\mathcal {EGW}\) distribution

Samples of size 50, 100, 500, and 1000 were obtained using the \(\mathcal {EGW}\) qf given by
$$\begin{aligned} Q_{\mathcal {EGW}}\left( q\right) =\left\{ \log \left[ 1-q^{\frac{1}{b}}\right] ^{-\left( \frac{1}{a \alpha ^\beta }\right) } \right\} ^{\frac{1}{\beta }}, \end{aligned}$$
where q takes random values from a \(U\left( 0,1\right)\), adopting \(a=2\), \(b=3\), \(\alpha =4\) and \(\beta =5\). The estimates were acquired by the maximum likelihood method via BFGS, SANN, and Nelder–Mead. Figures 1 and 2 display the histograms of the data simulated from the \(\mathcal {EGW}\) distribution, together with the fitted \(\mathcal {EGW}\) densities and the empirical distribution, for data sets of sizes 50, 100, 500, and 1000. The samples were generated using the qf of the \(\mathcal {EGW}\) distribution, and the BFGS, SANN, and Nelder–Mead algorithms provided the estimates via MLE.

Estimated densities for \(\mathcal {EGW}\) distribution and the distribution of the empirical values for the sets of simulated data of sizes 50 and 100

Estimated densities for \(\mathcal {EGW}\) distribution and the distribution of the empirical values for the sets of simulated data of sizes 500 and 1000

Next, we present the results of the parameter estimation using the \(\mathcal {EGW}\) distribution. The BFGS method proved to be inefficient for estimating parameter a, even as the number of simulated data increased. For parameter b, the estimates showed reasonable results for 500 and 1000 simulated data. However, the method was not satisfactory regarding the \(\alpha\) parameter. Finally, a reasonable result was obtained for the \(\beta\) parameter only for 1000 simulated data. Regarding the SANN method, the estimation was inefficient for the parameters a and \(\alpha\). The estimates for parameter b were reasonable only from 500 simulated data. For the \(\beta\) parameter, there was a reasonable estimate only when 1000 simulated data were reached. The Nelder–Mead method did not give satisfactory results for the estimation of parameters a and \(\alpha\). However, it presented a reasonable estimate for parameter b from 500 simulated data, as well as for the \(\beta\) parameter, but only for 1000 simulated data.
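A single replicate of the sampling-and-refitting procedure can be sketched as follows. This is a simplified stand-in for the fitDist and getSimulation routines referenced above (their pseudo-code is in the Appendix): function names, starting values and the crude positivity guard are ours, and, because of the non-identifiability discussed earlier, the individual estimates of a and α should not be expected to recover the true values.

# Draw from the EGW quantile function and refit by maximum likelihood with optim()
regw <- function(n, a, b, alpha, beta) {
  q <- runif(n)
  (-log(1 - q^(1 / b)) / (a * alpha^beta))^(1 / beta)
}

negloglik_egw <- function(par, t) {
  a <- par[1]; b <- par[2]; alpha <- par[3]; beta <- par[4]
  if (any(par <= 0)) return(1e10)            # crude positivity guard
  z <- a * (alpha * t)^beta
  -sum(log(a * b * beta) + beta * log(alpha) + (beta - 1) * log(t) -
         z + (b - 1) * log(1 - exp(-z)))
}

set.seed(123)
t_sim <- regw(500, a = 2, b = 3, alpha = 4, beta = 5)
fit <- optim(par = c(1, 1, 1, 1), fn = negloglik_egw, t = t_sim,
             method = "Nelder-Mead", hessian = TRUE)
fit$par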
In the simulations concerning the estimation of the parameters of the \(\mathcal {EGW}\) distribution, we obtained 81.25% (39/48) of inefficient estimates, 18.75% (9/48) of reasonable estimates and none satisfactory. The graphs of all methods showed equivalent adjustments; more details are available in "Appendix." See Table 2 including the standard error (SE) and the mean squared error (MSE) and Figs. 1, 2. Simulation for \(\mathcal {EW}\) distribution Although it is a well-known model and numerous other models generalize it, to our knowledge, simulation studies have not been carried out with the \(\mathcal {EW}\) distribution. Samples of size 50, 100, 500, and 1000 were obtained using the qf of the \(\mathcal {EW}\) distribution. The results of the simulations are presented in Table 3. The \(\mathcal {EW}\) qf is given by $$\begin{aligned} Q_{EW}\left( q\right) =\left\{ \log \left[ 1-q^{\frac{1}{b}}\right] ^ {-\left( \frac{1}{c^\beta }\right) } \right\} ^{\frac{1}{\beta }}, \end{aligned}$$ where q takes random values from a \(U\left( 0,1\right)\) adopting \(b=3\), \(c=4\) , and \(\beta =5\). We obtain points of the \(\mathcal {EW}\) distribution given by (8). Figures 3 and 4 present the histogram from simulated data of the \(\mathcal {EW}\) distribution density and the empirical distribution for data size of 50, 100, 500, and 1000 using the \(\mathcal {EW}\) qf, and BFGS, SANN, and Nelder–Mead performed the estimates via MLE. Estimated densities for \(\mathcal {EW}\) distribution and the distribution of the empirical values for the sets of simulated data of sizes 50 and 100 Estimated densities for \(\mathcal {EW}\) distribution and the distribution of the empirical values for the sets of simulated data of sizes 500 and 1000 The estimation of the parameters of the \(\mathcal {EW}\) distribution presented the following results. For the BFGS method, with only 1000 simulated data, there was a reasonable result in estimating parameter b. Regarding parameter c, with 500 simulated data, we observed a reasonable estimate. However, for 1000 observations, the BFGS method had a satisfactory result. Regarding the \(\beta\) parameter, the estimates were reasonable only from 500 simulated data. With respect to the SANN method, the estimates for parameter b were reasonable only for 1000 simulated data. For parameter c, there was a reasonable estimate for 500 simulated data. However, for 1000 simulated data, the estimation was satisfactory. For 500 simulated data onwards, the \(\beta\) parameter estimates were reasonable. Finally, for the Nelder–Mead method, the estimation of parameter b was reasonable only for 1000 simulated data. The estimates for parameter c were reasonable and satisfactory, for 500 and 1000 simulated data, respectively. From 500 simulated data, the estimates for the \(\beta\) parameter were reasonable. For the simulations generated for the \(\mathcal {EW}\) distribution, we obtained 58.33% (21/36) inefficient estimates, 33.34% (12/36) reasonable and 8.33% (3/36) satisfactory. Thus, we can observe that the identifiability (reparameterization) of the \(\mathcal {EW}\) distribution provided better results in the simulations, as it decreased the amount of inefficient estimates (81.25% \(\rightarrow\) 58.33%) and increased the amount of reasonable estimates (18.75% \(\rightarrow\) 33.34%) and satisfactory (0% \(\rightarrow\) 8.33%). 
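The corresponding step for the identifiable EW parameterization is analogous (again a sketch with our own function names, not the authors' implementation):

rew <- function(n, b, c, beta) {
  q <- runif(n)
  (-log(1 - q^(1 / b)) / c^beta)^(1 / beta)
}

negloglik_ew <- function(par, t) {
  b <- par[1]; c <- par[2]; beta <- par[3]
  if (any(par <= 0)) return(1e10)            # crude positivity guard
  z <- (c * t)^beta
  -sum(log(b * beta) + beta * log(c) + (beta - 1) * log(t) -
         z + (b - 1) * log(1 - exp(-z)))
}

set.seed(123)
t_sim <- rew(500, b = 3, c = 4, beta = 5)
optim(c(1, 1, 1), negloglik_ew, t = t_sim, method = "Nelder-Mead")$par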
The ratios between the execution times (in seconds) of the simulations of the \(\mathcal {EGW}\) and \(\mathcal {EW}\) distributions were as follows: 61,052/31,845 (1.92), 164,702/55,106 (2.99), 397,079/231,317 (1.72), and 590,006/390,454 (1.51). These results show that the \(\mathcal {EW}\) distribution requires a much shorter execution time. Thus, the identifiability of the \(\mathcal {EW}\) distribution has the additional advantage of reducing the time needed to run computer simulations.

Application with the \(\mathcal {EGW}\) distribution and the \(\mathcal {EW}\) distribution

In this section, we analyze a real data set of Nelore cattle [25] using the \(\mathcal {EGW}\) distribution and the \(\mathcal {EW}\) distribution. The maximum likelihood estimates were obtained with the BFGS, SANN, and Nelder–Mead algorithms. The commercial production of beef in Brazil, which mostly originates from the Nelore breed, seeks to optimize the time for the calves to reach a specified weight between birth and weaning. The data refer to 69 Nelore bulls and record the time (in days) until the animals reached the weight of 160 kg during the period from birth to weaning.

a Estimated density for \(\mathcal {EGW}\) distribution and the empirical distribution for the set of Nelore data. b Estimated survival function for \(\mathcal {EGW}\) distribution and the Kaplan–Meier estimate for the set of Nelore data with a confidence interval of 0.95

a Estimated density for \(\mathcal {EW}\) distribution and the empirical distribution for the set of Nelore data. b Estimated survival function for \(\mathcal {EW}\) distribution and the Kaplan–Meier estimate for the set of Nelore data with a confidence interval of 0.95

Figure 5 exhibits the results obtained for the \(\mathcal {EGW}\) distribution, together with the parameter estimation table (Table 4 in "Appendix"). One can note that the BFGS method provided a better fit to the empirical function and to the histogram than the other methods considered in this article. Analyzing the plots in Fig. 6 and the results tables (see Table 5 in "Appendix"), it is observed that the Nelder–Mead method adjusted the \(\mathcal {EW}\) distribution better than the other methods with respect to the histogram and the empirical function. Notwithstanding, the estimation of the parameters by the Nelder–Mead method did not produce results for the SE of parameters b and c. Hence, as the estimation of the parameters by the BFGS method was the second-best fit, and results were also produced for the SE of the parameters b, c and \(\beta\), one can consider that the BFGS method performed the most suitable adjustment for the data via the \(\mathcal {EW}\) distribution. Table 4 (in "Appendix") shows that the Nelder–Mead method was able to perform the estimation of the parameters of the \(\mathcal {EGW}\) distribution, but it failed to report the SE, since the produced Hessian returned NaN (abbreviation for Not a Number) in the first row and the first column, whose information refers to the parameter a. This suggests that the solution found by the Nelder–Mead method is not reliable in this case and, consequently, that the model adjusted with the estimated parameters is not suitable for these data. This fact may be related to the lack of identifiability of the \(\mathcal {EGW}\) distribution.

In this study, we presented a technique to reduce the parameters of the exponentiated generalized Weibull distribution (\(\mathcal {EGW}\)).
Additionally, we identified that the exponentiated Weibull distribution \(\mathcal {(EW)}\) displayed more parsimony and identifiability in the parameters than the \(\mathcal {EGW}\). The performances of the two distributions were analyzed using simulated and a real dataset; the \(\mathcal {EW}\) performed slightly better with simulated data and lightly worse with real data. All data generated or analyzed during this study are included in this published article. cdf: hrf: hazard rate function \(\mathcal {EW}\):: exponentiated Weibull \(\mathcal {EGW}\):: exponentiated generalized Weibull qf: quantile function BFGS : Brogden–Fletcher–Golfarb–Shanno SANN : simulated annealing SE : MSE : mean squared error Mudholkar, G.S., Srivastava, D.K., Freimer, M.: The exponentiated Weibull family: a reanalysis of the bus-motor-failure data. Technometrics 37(4), 436–445 (1995) Xie, M., Lai, C.D.: Reliability analysis using an additive Weibull model with bathtub-shaped failure rate function. Reliab. Eng. Syst. Saf. 52(1), 87–93 (1996) Carrasco, J.F., Ortega, E.M.M., Cordeiro, G.M.: A generalized modified Weibull distribution for lifetime modeling. Comput. Stat. Data Anal. 53(2), 450–462 (2008) Gusmão, F.R.S., Ortega, E.M.M., Cordeiro, G.M.: The generalized inverse Weibull distribution. Stat. Pap. 52, 591–619 (2011) Cordeiro, G.M., Ortega, E.M.M., Cunha, D.C.: The exponentiated generalized class of distributions. J. Data Sci. 11, 1–27 (2013) Eugene, N., Lee, C., Famoye, F.: Beta-normal distribution and its applications. Commun. Stat. Theory Methods 31, 497–512 (2002) Cordeiro, G.M., Lemonte, A.J.: The exponentiated generalized Birnbaum–Saunders distribution. Appl. Math. Comput. 247, 762–779 (2014) Silva, R., Gomes-Silva, F., Ramos, M., Cordeiro, G.M.: A new extended gamma generalized model. Int. J. Pure Appl. Math. 100(2), 309–335 (2015) Andrade, T., Rodrigues, H., Bourguignon, M., Cordeiro, G.M.: The exponentiated generalized Gumbel distribution. Rev. Colomb. Estad. 38(1), 123–143 (2015) Gomes-Silva, F., Silva, R., Percontini, A., Ramos, M., Cordeiro, G.M.: An extended Dagum distribution: properties and applications. Int. J. Appl. Math. Stat. 56, 35–56 (2017) Oguntunde, P.E., Odetunmibi, O.A., Adejumo, A.O.: On the exponentiated generalized Weibull distribution: a generalization of the Weibull distribution. Indian J. Sci. Technol. 8(35), 67611 (2015) Andrade, T.A.N., Bourguignon, M., Cordeiro, G.M.: The exponentiated generalized extended exponential distribution. J. Data Sci. 14, 393–414 (2016) Cordeiro, G.M., Lemonte, A., Campelo, A.K.: Extended arcsine distribution to proportional data: properties and applications. Stud. Sci. Math. Hung. 53, 440–466 (2017) Cordeiro, G.M., Andrade, T.A.N., Bourguignon, M., Gomes-Silva, F.: The exponentiated generalized standardized half-logistic distribution. Int. J. Stat. Probab. 6, 24–42 (2017) Andrade, T., Zea, L.: The exponentiated generalized extended pareto distribution. J. Data Sci. 16, 781–800 (2018) Andrade, T., Gomes-Silva, F., Zea, L.: Mathematical properties, application and simulation for the exponentiated generalized standardized Gumbel distribution. Acta Sci. Technol. 41, 1807–8664 (2019) Andrade, T., Chakraborty, S., Handique, L., Gomes-Silva, F.: The exponentiated generalized extended Gompertz distribution. J. Data Sci. 17, 299–330 (2019) Casella, G., Berger, R.L.: Statistical Inference. 
Duxbury Press, Belmont (2001) Jones, G., Johnson, W.O., Hanson, T.E., Christensen, R.: Identifiability of models for multiple diagnostic testing in the absence of a gold standard. Biometrics 66(3), 855–863 (2010) Paulino, C.D., Pereira, C.A.B.: On identifiability of parametric statistical models. Stat. Methods Appl. J. Ital. Stat. Soc. 3, 125–151 (1994) Mudholkar, G.S., Srivastava, D.K.: Exponentiated Weibull family for analyzing bathtub failure-rate data. IEEE Trans. Reliab. 42(2), 299–302 (1993) Jiang, R., Murthy, D.N.P.: The exponentiated Weibull family: a graphical approach. IEEE Trans. Reliab. 48(1), 68–72 (1999) Gusmão, F.R.S., Ortega, E.M.M., Cordeiro, G.M.: Reply to the Letter to the Editor of M. C. Jones. Stat. Pap. 53, 252–254 (2012) R Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna (2012) Colosimo, E.A., Giolo, S.R.: Análise de sobrevivência aplicada. Edgar Blucher, São Paulo (2006) Broyden, C.G.: The convergence of a class of double-rank minimization algorithms 1. General considerations. IMA J. Appl. Math. 6(1), 76–90 (1970) Fletcher, R.: A new approach to variable metric algorithms. Comput. J. 13(3), 317–322 (1970) Goldfarb, D.: A family of variable metric updates derived by variational means. Math. Comput. 24(109), 23–26 (1970) Shanno, D.F.: Conditioning of quasi-Newton methods for function minimization. Math. Comput. 24(111), 647–656 (1970) Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., Teller, E.: Equation of state calculations by fast computing machines. J. Chem. Phys. 21(6), 1087–1092 (1953) Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P.: Optimization by simulated annealing. Science 220(4598), 671–680 (1983) Bélisle, C.J.P.: Convergence theorems for a class of simulated annealing algorithms on \({\cal{R}}^d\). J. Appl. Probab. 29(4), 885–895 (1992) Cortez, P.: Modern Optimization with R. Springer, Berlin (2014) Nelder, J.A., Mead, R.: A simplex method for function minimization. Comput. J. 7(4), 308–313 (1965) Givens, G.H., Hoeting, J.A.: Computational Statistics, 2nd edn. Wiley, London (2013) No funding was received. DEINFO, Universidade Federal Rural de Pernambuco, Recife, Brazil Felipe R. S. de Gusmão, Frank Gomes-Silva, Fábio V. J. Silveira & Jader S. Jale Instituto Federal de Pernambuco, Campus Recife, Recife, Brazil Cícero C. R. de Brito Departamento de Estatística, Universidade Estadual da Paraíba, Campina Grande, Brazil Sílvio F. A. Xavier-Júnior Departamento de Estatística, Universidade Federal da Paraíba, João Pessoa, Brazil Pedro R. D. Marinho Felipe R. S. de Gusmão Frank Gomes-Silva Fábio V. J. Silveira Jader S. Jale FRSG (was a major contributor in writing the manuscript) involved in writing—original draft, methodology and investigation; FGS involved in writing—review and editing, supervision and validation; CCRB involved in visualization, supervision and validation; FVJS involved in methodology and visualization; JSJ involved in methodology and software; SFAXJ involved in writing—review and editing and visualization; PRDM involved in methodology and software. All authors read and approved the final manuscript. Correspondence to Frank Gomes-Silva. The authors declare that they have no competing interests See Tables 2, 3, 4 and 5. 
Table 2 MLE estimates for the parameters of \(\mathcal {EGW}\) distribution with simulated data from \(\mathcal {EGW}\) distribution via BFGS, SANN, and Nelder–Mead algorithms Table 3 MLE estimates for the parameters of \(\mathcal {EW}\) distribution with simulated data from \(\mathcal {EW}\) distribution via BFGS, SANN, and Nelder–Mead algorithms Table 4 MLE estimates for the parameters of \(\mathcal {EGW}\) distribution with Nelore data via BFGS, SANN, and Nelder–Mead algorithms Table 5 MLE estimates for the parameters of \(\mathcal {EW}\) distribution with Nelore data via BFGS, SANN, and Nelder–Mead algorithms BFGS algorithm Henceforward, the following notation is used: p is the number of parameters to be estimated, \(\varvec{\theta }=(\theta _1, \ldots , \theta _p)^{\top }\in \varTheta\) is the vector of unknown parameters, \(\varvec{\theta _0}=(\theta _{01}, \ldots , \theta _{0p})^{\top }\in \varTheta\) the initial guess solution, f the objective function (minimization by default) representing the log-likelihood function \(\ell (\varvec{\theta }\vert x)\), and x the dataset. The BFGS is a Quasi-Newton second-derivative line search method used to solve unconstrained optimization problems. Algorithm 1 shows the pseudo-code of the BFGS algorithm [26,27,28,29]. SANN algorithm Annealing is the physical process used to melt metals, which are heated to high temperatures and then cooled slowly, producing a homogeneous material. The simulated annealing (SA) algorithm was originally proposed by [30], being developed later by [31] in the context of optimization problem. The SANN is a variant of SA given in [32], and its pseudo-code is presented in Algorithm 2, adapted from [33]. Nelder–Mead algorithm The [34] simplex method is an algorithm of unconstrained optimization that belongs to a more general class of direct search whose objective is to find the minimum of a function f. Algorithm 3 shows the pseudo-code of Nelder–Mead algorithm [35]. fitDist function Algorithm 4 shows the pseudo-code to the fitDist function. This function is used to obtain parameter estimates as well as their log-likelihood, variance, confidence interval and MSE. getSimulation function Algorithm 5 shows the pseudo-code to the function getSimulation. This is the main routine for generating the simulations of the distributions. Gusmão, F.R.S.d., Gomes-Silva, F., Brito, C.C.R.d. et al. Analyzing and solving the identifiability problem in the exponentiated generalized Weibull distribution. J Egypt Math Soc 29, 21 (2021). https://doi.org/10.1186/s42787-021-00130-x DOI: https://doi.org/10.1186/s42787-021-00130-x Exponentiated Generalized Weibull Identifiability 62F10
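As a closing illustration for the appendix, the sketch below shows how the EW log-likelihood could be maximised in R with optim() using the same three algorithms described in Algorithms 1–3. It is a simplified stand-in under stated assumptions, not the fitDist or getSimulation code used in the paper; the density follows the parametrisation implied by the EW quantile function given earlier, and starting values and convergence behaviour are for illustration only.

```r
# Negative log-likelihood of the EW density
# f(x) = b * beta * c^beta * x^(beta-1) * exp(-(c*x)^beta) * [1 - exp(-(c*x)^beta)]^(b-1)
negloglik_ew <- function(par, x) {
  b <- par[1]; c <- par[2]; beta <- par[3]
  if (any(par <= 0)) return(1e10)            # keep the parameters positive
  z <- (c * x)^beta
  ll <- log(b) + log(beta) + beta * log(c) + (beta - 1) * log(x) -
        z + (b - 1) * log(1 - exp(-z))
  -sum(ll)
}

set.seed(1)
q <- runif(500)
x <- (-log(1 - q^(1 / 3)) / 4^5)^(1 / 5)     # EW data with b = 3, c = 4, beta = 5

start    <- c(1, 1, 1)                       # naive starting values
fit_bfgs <- optim(start, negloglik_ew, x = x, method = "BFGS", hessian = TRUE)
fit_nm   <- optim(start, negloglik_ew, x = x, method = "Nelder-Mead")
fit_sann <- optim(start, negloglik_ew, x = x, method = "SANN")

rbind(BFGS = fit_bfgs$par, NelderMead = fit_nm$par, SANN = fit_sann$par)
sqrt(diag(solve(fit_bfgs$hessian)))          # approximate SEs from the Hessian
```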
Is unemployment in young adulthood related to self-rated health later in life? Results from the Northern Swedish cohort Fredrik Norström ORCID: orcid.org/0000-0002-0457-21751, Urban Janlert1 & Anne Hammarström2 Many studies have reported that unemployment has a negative effect on health. However, little is known about the long-term effect for those who become unemployed when they are young adults. Our aim was to examine how unemployment is related to long-term self-rated health among 30 year olds, with an emphasis on how health differs in relation to education level, marital status, previous health, occupation, and gender. In the Northern Swedish Cohort, 1083 teenagers (~16 years old) were originally invited in 1981. Of these, 1001 participated in the follow-up surveys in 1995 and 2007. In our study, we included participants with either self-reported unemployment or activity in the labor force during the previous three years in the 1995 follow-up so long as they had no self-reported unemployment between the follow-up surveys. Labor market status was studied in relation to self-reported health in the 2007 follow-up. Information from the 1995 follow-up for education level, marital status, self-reported health, and occupation were part of the statistical analyses. Analyses were stratified for these variables and for gender. Analyses were performed with logistic regression, G-computation, and a method based on propensity scores. Poor self-rated health in 2007 was reported among 43 of the 98 (44%) unemployed and 159 (30%) of the 522 employed subjects. Unemployment had a long-term negative effect on health (odds ratio with logistic regression 1.74 and absolute difference estimates of 0.11 (G-computation) and 0.10 (propensity score method)). At the group level, the most pronounced effects on health were seen in those with upper secondary school as their highest education level, those who were single, low-level white-collar workers, and women. Even among those becoming unemployed during young adulthood, unemployment is related to a negative long-term health effect. However, the effect varies among different groups of individuals. Increased emphasis on understanding the groups for whom unemployment is most strongly related to ill health is important for future research so that efforts can be put towards those with the biggest need. Still, our results can be used as the basis for deciding which groups should be prioritized for labor-market interventions. It is generally agreed that unemployment is related to poor health [1,2,3]. It has been debated whether unemployment causes poorer health or if poorer health among unemployed individuals can be solely explained by poor health increasing the risk of getting unemployed. The most common view is that unemployment in fact causes poorer health [3], but there are also a few studies arguing against this [4]. The study context has been shown to have a major role in explaining how unemployment is related to poorer health [2], so it is possible, therefore, that both those who argue for and against the causality link might be partly correct. Less is known about the long-term effects of unemployment on health. The few studies that have examined this are well in agreement that unemployment is related to poorer long-term health [5,6,7,8,9] as well as other social adversities such as lower income [10]. 
A long-term negative health effect from youth unemployment has been shown in studies using the Northern Swedish Cohort [11] in the form of psychological symptoms [6, 8, 9], somatic symptoms [5, 6], and high blood pressure [7] in follow-ups of 16 year olds at 30 and 42 years of age. It is rare for studies to look at long-term follow-ups of health effects from unemployment at a later age. Strandh et al., using data from the Northern Swedish cohort, could not confirm a long-term effect on psychological symptoms from unemployment at the age of 30 years in a 12-year follow-up [8]. Many variables have been included as part of the statistical analysis for studies of the relationship between unemployment and health, the most common being gender, age, education level, marital status, household income, geographic location, and social network/support [2]. In studies of the relationship between unemployment and health, most of these variables are commonly only of interest as potentially confounding variables and are not presented with stratified estimates for each outcome of the factors. Gender, age, and geographic location were actually the only factors where results were reported for each outcome in at least 5 of the 41 studies in a recent review [2]. In previous studies with results presented on the group level, the effect on health from becoming unemployed has usually differed between groups [2]. The groups most disfavored by unemployment often vary depending on when the study was performed and the target population that was studied. Only for manual workers (compared with non-manual workers) [12,13,14], those unemployed due to health reasons (compared with those unemployed for other reasons) [15], and those with a weak social network (compared with those with a strong social network) [16, 17] has a greater risk for poor health been unequivocally demonstrated. However, these conclusions are based on only a few studies, so it can still be questioned whether similar conclusions can be drawn for any context or population. There is a need for gendered and contextualized analyses as studies in the field have shown that unemployment could have various impacts on the health status of men and women [12, 18, 19]. Here, Raewyn Connells theories about how the patterned relations between men and women that form gender as a social structure, could be useful [20]. According to her theory, gender relations are on a structural level integrated into the labor market and in this way, different labor market conditions are constructed for men and women. Also for other groups, such as age and education, stratified results have been inconclusive, and they seem to depend on the study context. For some groups, there have even been results indicating no health effect or even potentially positive health effects from unemployment, e.g. for Spanish women [14] and for Swedes with only a primary-school education [21]. Thus it is not usually possible to draw general conclusions about how unemployment affects different groups [2]. Potentially confounding variables must be considered in the analysis of the effect on health from unemployment, and they need to be handled well in the statistical model. One of the keys to having good estimates of the health effect from unemployment is being able to overcome the problem arising from health selection, which appears due to people who become unemployed being more likely to have previous health problems than those who remain employed. 
For long-term effects due to unemployment, caution about unemployment during the follow-up period is needed to avoid interpretations related to a more recent unemployment experience. In previous studies this has been handled in the statistical analysis model [5,6,7,8,9], while a novel approach used in our study is to only include those without unemployment in the follow-up period. The health effect from being re-employed has been studied many times [1], but such research has had a different focus than our study, which is on the lasting health effect later in life due to unemployment. Thus, our aim was to examine how unemployment is related to long-term self-rated health among 30 year olds, with an emphasis on how health differs in relation to education level, marital status, previous health, occupation, and gender. Study design and participants The Northern Swedish Cohort was initiated in 1981. In that study, all pupils, most of whom were born in 1965, who were in their last year of compulsory school in a middle-sized town in Northern Sweden were invited to participate. For the cohort, there have been four follow-ups (1983, 1986, 1995, and 2007) [11]. Comprehensive questionnaires were distributed at the initial time of inclusion and at the four follow-ups, and the response rates have been very high, ranging from 1080 (99.7%) of 1083 invited individuals in the baseline investigation to 1010 individuals at the latest follow-up in 2007 (94.3% of those still alive). Further information about the Northern Swedish Cohort is available elsewhere [11]. In our study, survey data from the follow-ups in 1995 (when the participants were ~30 years of age) and 2007 (when the participants were ~42 years old) were used, which were available for 1001 individuals. We restricted our selection of individuals to the 654 participants who were active in the labor market at the follow-up in 1995 and who had no self-reported unemployment in between the follow-ups of 1995 and 2007. We used the inclusion criteria to detect differences in health on a longer time perspective due to unemployment and not due to unemployment spells between the follow-ups of 1995 and 2007. For all of our analyses, we required eligible responses on all of our candidate variables, leading to the exclusion of 34 individuals and resulting in a final selection of 620 individuals. Our final selection of individuals corresponded to 58% of those invited and still alive from the original cohort at the time of the 2007 follow-up. The definitions of "active in the labor market" and "no self-reported unemployment in between follow-ups" are given in the section "Definition of labor market status". A flow chart of the inclusion criteria is shown in Fig. 1. Flow chart of study participants. School leavers (~16 years of age) in a middle-sized town in Northern Sweden were invited in 1981. Follow-up surveys were conducted in 1995 and 2007. Participants were defined as active in the labor market 1995 if they were unemployed or employed. Requirements for being defined as unemployed and employed 1995 and not unemployed during follow-up are given in detail in the methods section Questionnaire data From the 2007 questionnaire, besides using no self-reported unemployment as an inclusion criterion, only our outcome variable, which measures self-rated health through three response alternatives ("good", "fairly good", and "poor") was used. 
For self-rated health, "fairly good" and "poor" were used to represent poor health, and "good" was used to represent good health. From the 1995 questionnaire, the same question was used and recoded identically, but questions about labor market status, education level, marital status, occupation, social support, cash margin, smoking, alcohol intake, weight, and height were also used. The labor market status variables were chosen to define the exposure to unemployment, while other variables were chosen due to them being listed among the most commonly used variables in studies similar to ours in a recent review [2], which hints that they are potential confounders in the relationship between health and labor market status, and because these variables were collected in the Northern Swedish Cohort [11]. Self-rated health in 1995 allowed the health difference due to current/recent unemployment in 1995 to be taken into account in our analyses. Definition of labor market status For labor market status, unemployment was used as the exposure and compared with employment as the non-exposure. We defined the participants' labor market status based on self-reported labor market status during the last three years using questionnaire responses from 1995. In the questionnaire, a tick for employment status(es) was given for each of the half years between autumn 1986 and autumn 1995 (the time period since the previous follow-up). From the listed employment statuses in the questionnaire, the alternatives "Full-time employment", "Part-time (20–39 hours a week) employment" and "Labor market measure" were defined as "Employed", the alternative "Unemployed" was defined as "Unemployed", and the alternatives "University/high-school", "Other education", "Casual job (< 20 hours a week)", "Sick leave", "On parental leave", and "Other" were defined as not being active in the labor market. A tick as unemployed during any of the six half-year periods between autumn 1992 and autumn 1995 defined the participant as unemployed in our study. Participants not defined as unemployed were defined as employed in our study if they had a tick for any of the "employed" alternatives for at least three half-year periods during the same time period of 1992 to 1995. To be considered to have no unemployment between follow-ups, which was a criterion for being included in our study, participants' responses to employment statuses between spring 1996 and autumn 2007 in the 2007 questionnaire were used. Participants were defined as having no unemployment during this time period if besides no unemployment being reported they also had at least three ticks for alternatives defined as employed during the period between spring 1996 and autumn 2007. Thus, we compared those with unemployment (the exposed group) during the ages of 28–30 years old with those who were employed (the reference group) during these ages, and we only made comparisons between individuals with no unemployment between 30 and 42 years of age. Requiring employment during at least 12 of the 24 time periods would have decreased our sample from 620 to 608 individuals, thus having only a small influence on the results. Potential confounding variables at age 30 Education level was divided into three groups – at most 2 years of secondary education, 3–4 years of secondary education (referred to as "upper secondary education"), and university studies (bachelors degree or completion of other education at higher level) – with at most 2 years of secondary education being the reference group. 
For marital status, the alternatives "living with wife/husband" and "living with cohabitant/partner" were defined as "married" and were used as the reference group, and the other alternatives of "alone", "with a friend", and "other" were defined as "single" and used as the exposure group. A socio-economic index was derived for each respondent from their description of their occupation based on the nomenclature used by Statistics Sweden [22], with blue-collar workers (codes 11–22 and 89) as the reference group and low-level white-collar workers (codes 31–36) and medium- to high-level white-collar workers (codes 42–79) as the exposure groups. For gender, men were used as the exposure group. For measuring social support, we used the Availability of Social Integration (AVSI) and Availability of Attachment (AVAT) instruments, which are part of the Interview Schedule for Social Interaction [23]. The AVSI consists of four questions with six response alternatives, and the AVAT consists of six questions with four alternatives. In both cases the questions are summed together for a maximum score of 24. For the AVSI, we used a cut-off of 13 or lower as the reference group, and for the AVAT we used a cut-off of 10 or higher as the reference group. For cash margin, the availability of 13,000 SEK (corresponding to 1366 euro on 28 March 2017) within a week was used as the reference group, and no availability of 13,000 SEK within a week was used as the exposure. For smoking, "not a current smoker" was used as the reference group and was compared with i) "smoking at most 10 cigarettes a day" or "smoking pipe or smoking cigar", and ii) "smoking more than 10 cigarettes a day". The total alcohol consumption in centiliters of pure alcohol per year for a study participant was calculated based on six questions, one for frequency and one for the amount of intake on each drinking occasion for each of the alcoholic beverages of beer, wine, and spirits. The frequency questions were almost identical for all three alcoholic beverages, with "never" valued as 0, "every or every second day" as 250, "1–2 times a week" as 80, "a few times a month" as 12, and "more seldom" as 6. The questions for the amount of intake of alcohol on each occasion had 7–9 response alternatives for each beverage. The scores for these responses are presented in Additional file 1: Table S1. For each of the beverages, there were also weights corresponding to the alcohol percentage – 0.05 for beer, 0.14 for wine, and 0.40 for spirits. The total alcohol consumption was calculated as the sum of alcohol intake for each beverage. For each beverage, the alcohol intake during a year was calculated by multiplying the frequency score, the amount score, and the weight. The alcohol intake score has been used in previous studies of the Northern Swedish Cohort and is considered to work well [7]. An alcohol intake score below 140 was used as the reference value for our analyses. Cutoff-values for the AVSI, AVAT, and alcohol intake were chosen to build two groups containing approximately equal numbers of individuals in both. Body mass index (BMI) was derived from self-reported weight and height and calculated as weight/height2. Those with BMI ≥ 30 kg/cm2 were defined as obese and those with BMI between 25 and 30 kg/cm2 were defined as overweight, and the two groups were used as the exposed groups. Logistic regression, G-computation, and a method using propensity scores were used to analyze the effect on health from unemployment. 
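Before describing those estimators in detail, the construction of two of the derived covariates above (the alcohol-intake score and the BMI groups) can be illustrated with a short R sketch. The frequency scores and alcohol-content weights are taken from the text, but the per-occasion amount scores are given only in Additional file 1 and are therefore left as user-supplied inputs; all object and function names are assumptions, not the study's actual code.

```r
# Frequency scores for each beverage, as stated in the text
freq_score <- c("never" = 0, "every or every second day" = 250,
                "1-2 times a week" = 80, "a few times a month" = 12,
                "more seldom" = 6)

# Yearly centilitres of pure alcohol: frequency score * amount score * strength,
# summed over beer (0.05), wine (0.14) and spirits (0.40).
# The amt_* amount scores must be looked up in Additional file 1.
alcohol_score <- function(freq_beer, amt_beer, freq_wine, amt_wine,
                          freq_spirits, amt_spirits) {
  freq_score[[freq_beer]]    * amt_beer    * 0.05 +
  freq_score[[freq_wine]]    * amt_wine    * 0.14 +
  freq_score[[freq_spirits]] * amt_spirits * 0.40
}
high_alcohol <- function(score) score >= 140    # reference group: score < 140

# BMI groups from self-reported weight (kg) and height (m); kg/m^2 convention
bmi_group <- function(weight_kg, height_m) {
  bmi <- weight_kg / height_m^2
  cut(bmi, breaks = c(-Inf, 25, 30, Inf), right = FALSE,
      labels = c("normal or under", "overweight", "obese"))
}
```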
The logistic regression model studies the effect of unemployment on health by comparing groups, and it is the most commonly used method for studies of the relationship between unemployment and health [2]. The other two methods estimate the risk difference using counterfactual arguments. The risk difference is estimated with E[Y(1)] − E[Y(0)], where E[Y(1)] corresponds to the expected effect if all individuals are unemployed and E[Y(0)] corresponds to the expected effect if all individuals are employed. Thus, the advantage with these methods estimators are that they correspond to the marginal effect of becoming unemployed. The procedure for our analysis was to first include all identified potentially confounding variables in a full model. Thereafter we applied a reduced model using only the significant variables in the full model. In the reduced model, we used the same participants as in the full model to allow for comparisons between models, which meant that 15 individuals with complete information for variables in the reduced model were excluded in these analyses. Interactions between variables were not considered in our analyses. We tested models that included and excluded our candidate variables in the reduced and full model, but these only had a limited effect on the estimate of unemployment on health. We therefore included education level, marital status, and occupation in the reduced and full models despite these potentially being collinear variables. Propensity scores were introduced in 1983 by Rosenbaum and Rubin [24], but these have rarely been used to study the effect of unemployment on health [2]. The propensity score is the conditional probability of being assigned to the exposure group based on baseline covariates. This implies that an exposed and unexposed individual with the same propensity score should have had the same probability of being exposed. If the estimates of the propensity score are unbiased, which cannot be assumed, these groups would then correspond to those in a randomized controlled trial (RCT). The bias of the estimates depends on how well the propensity score can balance both measured and unmeasured confounders. The strength of the RCT is that the balance of confounders can well be accomplished due to the randomization if the study protocol is followed, something that an observational study cannot accomplish in the study design. The inverse probability weight (IPW) was defined as \( w=\frac{X}{PS}+\frac{1- X}{1- PS} \), where X refers to the exposure (employed/unemployed), and PS to the estimate of the propensity score. We used an IPW estimator, as suggested by Lunceford and Davidian [25], to estimate the risk difference $$ {RD}_{IPW}=\frac{1}{n}\left[\sum_{i=1}^n{Y}_i\left(\frac{X_i}{PS_i}-\frac{\left(1-{X}_i\right)}{1-{PS}_i}\right)\right], $$ where Y refers to the outcome (self-rated health). The marginal effect from this estimator corresponds to the average treatment effect [26]. In our study, logistic regression, with covariates from the statistical model, was used to estimate the propensity scores, which correspond to the probability of being unemployed for an individual based on his or her characteristics. We calculated the standardized difference for each covariate in the reduced model to assess the balance of covariates between the employed and unemployed group, both with and without a weight [27]. 
For the unweighted sample, the standardized difference was defined as $$ d=100\frac{\left({\widehat{p}}_{unemployed}-{\widehat{p}}_{employed}\right)}{\sqrt{\frac{{\widehat{p}}_{unemployed}\left(1-{\widehat{p}}_{unemployed}\right)+{\widehat{p}}_{employed}\left(1-{\widehat{p}}_{employed}\right)}{2}}}, $$ where the denominator is the pooled standard deviation and \( \widehat{p} \) is the estimated proportion of individuals with the exposure level of the covariate. For categorical variables with three outcomes, two dummy variables were created and the reference group for the covariate was set to 0. For the weighted sample, the pooled standard deviation was calculated with $$ \sqrt{\frac{\sum {w}_i}{{\left(\sum {w}_i\right)}^2-\sum {w}_i^2}\sum {w}_i{\left({x}_i-{\bar{x}}_{weight}\right)}^2}, $$ where \( {\bar{x}}_{weight}=\frac{\sum {w}_i{x}_i}{\sum {w}_i} \) is the weighted prevalence for the covariate, and \( \widehat{p} \) was calculated with the same weighted mean, but for the employed and unemployed groups separately. G-computation is a regression method similar to logistic regression, but it differs in that it aims to estimate marginal effects [28]. For the G-computation, a logistic regression was first fitted with all variables in the statistical model, including labor market status. The risk difference was thereafter estimated from the fitted regression as the difference between the expected outcome if all individuals were unemployed and the expected outcome if all individuals were employed. Results from our G-computation estimator can be directly compared with those from the IPW estimator. We performed stratified analyses for the variables in the reduced model. We also performed stratified analyses for men and women because it has been shown in several studies that the effect of unemployment on health differs between the two groups [2]. In some cases our stratified analyses used smaller samples than has been recommended for logistic regression [29], which is at least 10 events per variable for the less common outcome. Such situations only occurred rarely for the logistic regression when self-rated health in 2007 was used as the outcome variable, but they became more of a problem when labor market status was used as the outcome variable for the estimation of propensity scores. We have highlighted in the results section when this criterion was not fulfilled. R Studio was used for all statistical analyses (R Studio, Boston, MA). The GLM procedure in R was used for the logistic regression estimates, and confidence intervals for the estimator were derived with the profile likelihood [30]. The Bootstrap technique with replacement was used to derive estimates of the mean square error for the IPW and G-computation estimators [31]. The 2.5% and 97.5% percentiles of the Bootstrap simulations were used to calculate the 95% confidence intervals. Sensitivity analyses were performed for the exclusion criterion of no unemployment during follow-up. In the first analysis, no individuals were excluded due to unemployment during the follow-up. In the second analysis, a variable was introduced with those defined as having unemployment during the follow-up as the exposed group and those without unemployment as the reference group. Pearson's χ²-test was used to test whether the exposure variable (labor market status) was associated with potential confounders. Statistical significance was defined at the 5% level.
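As a purely illustrative companion to the estimators just described, the sketch below shows how the propensity-score/IPW and G-computation risk differences could be computed in R for a hypothetical data frame d with a 0/1 outcome poor_health, a 0/1 exposure unemployed, and a few covariates. The variable names and model formula are assumptions, and this is not the analysis script used in the study.

```r
# Propensity scores from a logistic regression of exposure on covariates
ps_fit <- glm(unemployed ~ edu + single + occupation,
              family = binomial, data = d)
ps <- fitted(ps_fit)                       # estimated P(unemployed | covariates)

# Inverse probability weights and the IPW risk difference (RD_IPW above)
w <- d$unemployed / ps + (1 - d$unemployed) / (1 - ps)
rd_ipw <- mean(d$poor_health * (d$unemployed / ps -
                                (1 - d$unemployed) / (1 - ps)))

# G-computation: outcome model including the exposure, then predict the
# outcome for everyone as if unemployed and as if employed
out_fit <- glm(poor_health ~ unemployed + edu + single + occupation,
               family = binomial, data = d)
p1 <- predict(out_fit, newdata = transform(d, unemployed = 1), type = "response")
p0 <- predict(out_fit, newdata = transform(d, unemployed = 0), type = "response")
rd_gcomp <- mean(p1 - p0)

c(IPW = rd_ipw, Gcomp = rd_gcomp)
```

Percentile bootstrap confidence intervals of the kind described above can be obtained by resampling the rows of d with replacement, recomputing both risk differences in each resample, and taking the 2.5% and 97.5% percentiles.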
Of the 620 individuals without self-reported unemployment from 1995 to 2007 who had information for all study variables, 98 (16%) were defined as unemployed in 1995. The characteristics of the study population are presented in Table 1. It is notable that most participants reported good self-rated health in 1995 among both the employed (81%) and unemployed (74%), while the self-reported health for both groups had worsened by 2007, with this being more pronounced among the unemployed. Labor market status had an association with self-rated health in 2007, cash margin, and smoking (Table 1). Table 1 Characteristics for the study population (n = 620) Long-term effect on health from unemployment Our results showed a clear negative health effect from unemployment regardless of which estimator and statistical model we used (Table 2). The crude odds ratios and the adjusted odds ratios for the full and reduced models, and thereby the contribution from potential confounding variables, are presented in Additional file 2. The coefficients for the variables in the logistic regression were similar in both models, with the odds ratio for labor market status differing by only 0.01 units between the models. The differences between estimates for G-computation and IPW were also small between models. The standardized difference ranged from 5.5% to 17% without IPWs and from 0.71% to 2.1% when IPWs were used (Additional file 3: Table S3). Thus, the balance in observed baseline covariates was good for the propensity scores (Austin and Stuart referred to a standardized difference below 10% as being a level some authors considered to be negligible imbalance [27]). Table 2 Long-term effect of unemployment at 28–30 years of age on self-rated health (n = 620) All results from our sensitivity analyses also showed a clear negative and significant health effect from unemployment. The sensitivity analysis without exclusion of the unemployed during the follow-up period gave an odds ratio with logistic regression of 1.85 and an absolute difference estimate of 0.129 with G-computation and 0.123 with IPW. The sensitivity analysis where individuals was grouped based on having had an employment spell or not during the follow-up period (instead of excluding individuals with unemployment spells) resulted in an odds ratio with logistic regression of 1.70 and an absolute difference estimate of 0.112 with G-computation and 0.106 with IPW, which were close to the estimates for the reduced model (odds ratio of 1.74 and absolute difference estimates of 0.113 (G-computation) and 0.103 (IPW)). Long-term effect of unemployment on health in groups of individuals All of our stratifications showed a negative health effect for the unemployed compared to the employed, but not all of these were significant (Table 3). For the stratifications of education level, it was only for those with upper secondary education that there was a significantly poorer long-term health outcome for the unemployed compared with the employed for all estimates, while for both secondary education and university studies, there were significant negative effects only for the G-computation. For singles, there was significantly poorer health for the unemployed compared to the employed for the counterfactual estimates but not for the logistic regression estimate. For those who were married, the reductions in health among the unemployed were non-significant for all estimators. 
Table 3 Long-term effect of unemployment at 28–30 years of age on self-rated health at age 42 for groups of individuals (n = 620) For both stratifications on self-rated health in 1995, there was significantly poorer health for the unemployed compared to the employed for the G-computation estimate, while this was only the case for those with good self-rated health in 1995 for the logistic regression estimator and for none of the groups with the IPW estimator. However, the logistic regression estimators were very similar numerically, and the difference in sample size is likely to explain why only one of them was significant. For occupation, it was only for medium- and high-level white-collar workers that statistically significant poorer health was observed for the logistic regression estimator, while statistically significant poorer health was seen for all groups for the G-computation estimator and for no groups for the IPW estimator. For the G-computation estimator there was significantly poorer health reported for both unemployed men and women, while the difference was only significant for women for the other two estimators. In our study, we show in a 12-year follow-up of 30 year olds that there is a negative health effect from being unemployed at the age of 30 despite having had steady employment from the ages of 30 to 42 years. This provides evidence for a long-term effect from being unemployed at an older age than has previously been shown. Despite rather small samples (100–300 individuals), it is evident from our stratified results that the effect from unemployment differs between groups of individuals. Most pronounced is the long-term negative health effect for those with upper secondary education, those living alone, medium- and high-level white-collar workers, and women, while there was at least an indication of a negative long-term health effect of unemployment for all other groups. Strandh et al. studied the long-term health effect from unemployment at the age of 30 from the same cohort for psychological symptoms, but could not confirm a long-term negative health effect [8], which is in contrast to our results. Their study used a different health measure, which probably explains the different results. Our findings of a larger negative health effect for women than men is in agreement with two previous studies from the Northern Swedish Cohort [19, 32], as well as with other Swedish studies [21, 33]. Contrary to these studies that had a short-term perspective on the health consequences from unemployment, our study investigated the long-term effect. It is an interesting finding, therefore, that women seem to be more disfavored from being unemployed both in the long- and short-term than men. There are still not any commonly agreed upon reasons as to why Swedish women seem to suffer more from unemployment than men. In applying Connell's theory of gender relations on the Swedish labor market [20], it turns out that it is strongly gender segregated with women working in less paid occupations within the public sector and in worse work-environment [34, 35]. In addition, women work in more unsecure labor market positions and much more often than men they have part-time work. These could potentially be part of the explanation for our findings, and because it is important to find explanations behind the differences between unemployed men and women we recommend that more research is conducted on this topic from a gender theoretical perspective. 
Few studies have presented results for health effects of unemployment for those with different education levels, and those that have looked at this have shown inconclusive results [2]. In our study, we showed a large negative health effect due to unemployment for those with upper secondary education, while there was a lower effect on health for those with secondary education and university education. Despite estimates with a potentially large bias due to few unemployed with an upper secondary education, this group's health status seems likely to be more affected from unemployment in our study population. Also for occupation, only a few studies have presented stratified results [2]. Our study shows opposite results compared to these studies, indicating more health problems for the unemployed from a high-level occupation class than others. Stratified results for marital status have only been presented in two studies, and they showed no apparent difference between married and single individuals [12, 21]. Our study is, therefore, the first to show results indicating that single individuals might be more affected by unemployment than married individuals. Lack of a strong social network has been shown to be related to poorer health in two previous studies [16, 17], and perhaps our results could be interpreted as there being weaker social structure among single persons and that this has a negative effect on their health in relation to married people if they become unemployed. Differences in results for occupation and marital status between our study and others might be explained by us having a focus on the long-term effect from unemployment, which was not the case for previous studies. Although our study provides new and valuable information, more research is needed to be better informed about how education, occupation, and marital status are related to poor long-term health due to unemployment. A strength with our study is the very low attrition rate (6%). Still, despite a large sample of 1010 participants at the latest follow-up, not all stratified results fulfilled the recommendation for logistic regression of at least 10 events per variable for the less-common outcome [29]. Thus, the stratified results might be unreliable and give biased results, both for these and other estimates where the sample size was only slightly above the recommended threshold. Nevertheless, our results on the group level are valuable from a descriptive point of view, even if they cannot give very strong statistical evidence. We restricted our selection of individuals to those without unemployment during the follow-up because we did not want a prolonged unemployment experience to affect our effect estimates. In our sensitivity analyses, results were similar when a variable for unemployment during follow-up was used, while not excluding individuals with unemployment during the follow-up gave a slightly stronger negative health effect. Thus, this selection criterion is likely to have little bearing on the estimates. A negative long-term health effect due to unemployment might be related to precarious employment during the follow-up period. Of the unemployed with no unemployment during the follow-up in our study, 42 experienced precarious employment during the follow-up with only 15 of these experiencing it during the majority of the follow-up period. 
To avoid effect estimates that are mainly reflecting effects due to precarious employment, a further limitation of our study sample to only those without precarious employment during the follow-up period was an alternative. However, analysis based on such restriction would also require that precarious employment during 1992–1995 was used to define the employed group and a larger focus on precarious employment which was not the scope of our study. It would also result in a too small sample to have reliable estimates to restrict ourselves in such a way. We therefore considered our definition of the study sample and the labor market status groups to be the most feasible. If there is a non-negligible bias due to precarious employment during the follow-up period for our effect estimates then our conclusions are still likely to be valid, although they would then be indicative of unemployment being related to poor health due to future unstable labor market positions rather than due to the unemployment itself. It has been shown in studies from our study cohort that those with precarious employment has a poorer health than those with a stable employment [36, 37]. However, these studies have not evaluated the long-term consequences from having had a precarious employment, which is an angle that would be valuable to study and could be recommended for future research. The idea with propensity scores is to create two groups in the same manner as an RCT and thus to avoid problems from confounding. For the overall long-term health effect due to unemployment, our results were similar to those from the G-computation that were derived using logistic regression. Similar results are an indication that both methods work well, and are also in line with previous comparisons between propensity score techniques and logistic regression [38], but unmeasured confounders can still cause problems for the estimates. Also for our third method, logistic regression, the conclusions were generally similar. The confidence intervals estimated from the non-parametric Bootstrap, which was used for the G-computation and the IPW estimators, were derived from small samples. The G-computation estimator had small confidence intervals for stratified estimates and might therefore give more significant results than there really are. On the population level, the sample sizes are large enough so the bias for the confidence intervals based on a non-parametric Bootstrap should be very small. Health prior to unemployment might have been an important confounding factor to include in our analyses to limit the bias. We could have used information from earlier questionnaires (1981, 1983, and/or 1986) in the Northern Swedish Cohort. However, these questionnaires did not include self-rated health and were not informative about health close to the unemployment period (1992–1994) of our study. We did use self-rated health at the time of recent/current unemployment in order to explain how health has changed between the two occasions (1995 and 2007). Interestingly, our stratifications on self-rated health in 1995 showed very similar results. This indicates that health status at the time of becoming unemployed at most might play a small role in the long-term health experience from unemployment. This also gives good support for our results being highly reliable and not confounded by previous health. 
Avoiding unemployment therefore seems valuable from a public health perspective not only for those that suffer from poorer health at the time of the unemployment, but also for those whose health is at most marginally affected at the time of the unemployment. In a previous review, it was shown that only gender, age, and geographical location are commonly reported on the group level despite results on the group level showing that the context matters for the extent to which unemployment leads to poorer health [2]. Our study supports the recommendation from this earlier review that there should be greater focus on results on the group level. In the review, it was also reported that it is common to include many factors in the statistical model for the analysis of the health effect due to unemployment in order to control for confounding effects, as is also the case with other social epidemiological research questions. Still, it is apparent that only a few, if any, of these factors are commonly analyzed on the group level. We therefore support the recommendation to increase the focus on the group level for our and other research questions within the social epidemiological research field. We limited ourselves to self-rated health as the health outcome. It would be valuable to investigate the long-term health effect of unemployment on other health outcomes, e.g. somatic symptoms, especially because the study by Strandh et al. showed different results than ours. In future studies with similar research questions as ours, we propose that the G-computation and IPW estimators should be used more frequently because they present the marginal effect and not the relative effect like logistic regression. An article with a more thorough evaluation of the sensitivity to the model set-up, the impact from the definition of labor market status, and the similarity of estimates from G-computation and propensity scores with IPW is planned from the same material as in this article. These issues are highly important for the interpretation of the causal effect from unemployment on health. Because there is a limited number of studies presenting results for groups of individuals, more research is needed to better understand for whom and to what extent unemployment is related to poor health. Still, studies based on small samples such as ours can provide valuable evidence for policy makers, and such studies from a public health perspective can help guide decisions on which labor market measures to prioritize and to whom they should be directed. It might, for instance, be recommendable to focus more on understanding why women seem to suffer more from unemployment than men and to potentially prioritize efforts that can improve their chances of being re-employed and thereby improving public health. Our results are also important to provide support for future studies that can confirm the relationships observed in this study. The study context has been shown to be highly relevant for research within unemployment and health, but we still think that our study will provide highly relevant information both for other areas within Sweden and in other countries. In comparison to young adults with employment, those who are unemployed suffer from poorer health not only shortly after their unemployment, but also later in life. Our study therefore implies that it is important to lower the unemployment rate during young adulthood from not only an economical, but also from a long-term public health perspective. 
The health effect due to unemployment varies between groups. For future research, it is important to put more emphasis on identifying groups of individuals for whom unemployment is most related to ill health so that efforts can be put towards the groups with the greatest need. AVAT: Availability of Attachment AVSI: Availability of Social Integration IPW: Inverse probability weight RCT: McKee-Ryan FM, Song ZL, Wanberg CR, Kinicki AJ. Psychological and physical well-being during unemployment: a meta-analytic study. J Appl Psychol. 2005;90(1):53–76. Norström F, Virtanen P, Hammarström A, Gustafsson P, Janlert U. How does unemployment affect self-assessed health? A systematic review focusing on subgroup effects. BMC Public Health. 2014;14(1):1310. Paul KI, Moser K. Unemployment impairs mental health: meta-analyses. J Vocat Behav. 2009;74(3):264–82. Böckerman P, Ilmakunnas P. Unemployment and self-assessed health: evidence from panel data. Health Econ. 2009;18(2):161–79. Brydsten A, Hammarström A, Strandh M, Johansson K. Youth unemployment and functional somatic symptoms in adulthood: results from the northern Swedish cohort. Eur J pub Health. 2015;25(5):796–800. Hammarström A, Janlert U. Early unemployment can contribute to adult health problems: results from a longitudinal study of school leavers. J Epidemiol Community Health. 2002;56(8):624–30. Nygren K, Gong W, Hammarström A. Is hypertension in adult age related to unemployment at a young age? Results from the northern Swedish cohort. Scand J Public Health. 2015;43(1):52–8. Strandh M, Winefield A, Nilsson K, Hammarström A. Unemployment and mental health scarring during the life course. Eur J pub Health. 2014;24(3):440–5. Virtanen P, Hammarström A, Janlert U. Children of boom and recession and the scars to the mental health - a comparative study on the long term effects of youth unemployment. Int J Equity Health. 2016;15:14. Arulampalam W. Is unemployment really scarring? Effects of unemployment experiences on wages. Econ J. 2001;111(475):F585–606. Hammarström A, Janlert U. Cohort profile: the northern Swedish cohort. Int J Epidemiol. 2012;41(6):1545–52. Artazcoz L, Benach J, Borrell C, Cortes I. Unemployment and mental health: understanding the interactions among gender, family roles, and social class. Am J Public Health. 2004;94(1):82–8. Backhans MC, Hemmingsson T. Unemployment and mental health-who is (not) affected? Eur J pub Health. 2012;22(3):429–33. Puig-Barrachina V, Malmusi D, Martinez JM, Benach J. Monitoring social determinants of health inequalities: the impact of unemployment among vulnerable groups. Int J Health Serv. 2011;41(3):459–82. Burgard SA, Brand JE, House JS. Toward a better estimation of the effect of job loss on health. J Health Soc Behav. 2007;48(4):369–84. Kroll LE, Lampert T. Unemployment, social support and health problems results of the GEDA study in Germany, 2009. Dtsch Arztebl Int. 2011;108(4):47–52. Luo J, Qu Z, Rockett I, Zhang X. Employment status and self-rated health in north-western China. Public Health. 2010;124(3):174–9. Hammarström A, Lundman B, Ahlgren C, Wiklund M. Health and masculinities shaped by agency within structures among young unemployed men in a northern Swedish context. PLoS One. 2015;10(5):e0124785. Reine I, Novo M, Hammarström A. Unemployment and ill health - a gender analysis: results from a 14-year follow-up of the northern Swedish cohort. Public Health. 2013;127(3):214–22. Connell R. Gender, health and theory: conceptualizing the issue, in local and world perspective. Soc Sci med. 
2012;74(11):1675–83. Åhs A, Westerling R. Self-rated health in relation to employment status during periods of high and of low levels of unemployment. Eur J pub Health. 2006;16(3):294–304. Socioekonomisk indelning (SEI). http://www.scb.se/statistik/_publikationer/OV9999_1982A01_BR_X11%C3%96P8204.pdf. Henderson S, Duncan-Jones P, Byrne DG, Scott R. Measuring social relationships. The interview schedule for social interaction. Psychol med. 1980;10(4):723–34. Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70(1):41–55. Lunceford JK, Davidian M. Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study. Stat med. 2004;23(19):2937–60. Brookhart MA, Wyss R, Layton JB, Sturmer T. Propensity score methods for confounding control in Nonexperimental research. Circ-Cardiovasc Qual Outcomes. 2013;6(5):604–11. Austin PC, Stuart EA. Moving towards best practice when using inverse probability of treatment weighting (IPTW) using the propensity score to estimate causal treatment effects in observational studies. Stat med. 2015;34(28):3661–79. Snowden JM, Rose S, Mortimer KM. Implementation of G-computation on a simulated data set: demonstration of a causal inference technique. Am J Epidemiol. 2011;173(7):731–8. Norström F. Poor quality in the reporting and use of statistical methods in public health - the case of unemployment and health. Arch Public Health. 2015;73:56. R Core Team: R: A Language and Environment for Statistical Computing. In.: R Foundation for Statistical Computing; 2015. Davison AC, Hinckley DV. Bootstrap methods and their application. Cambridge, United Kingdom: Cambridge University Press; 1997. Hammarström A, Gustafsson PE, Strandh M, Virtanen P, Janlert U. It's no surprise! Men are not hit more than women by the health consequences of unemployment in the northern Swedish cohort. Scand J Public Health. 2011;39(2):187–93. Roos E, Lahelma E, Saastamoinen P, Elstad JI. The association of employment status and family status with health among women and men in four Nordic countries. Scand J Public Health. 2005;33(4):250–60. Statistics Sweden: Women and men in Sweden-facts and figures 2014; 2014. Theorell T, Hammarström A, Aronsson G, Traskman Bendz L, Grape T, Hogstedt C, et al. A systematic review including meta-analysis of work environment and depressive symptoms. BMC Public Health. 2015;15:738. Waenerlund AK, Gustafsson PE, Virtanen P, Hammarström A. Is the core-periphery labour market structure related to perceived health? Findings of the northern Swedish cohort. BMC Public Health. 2011;11:956. Waenerlund AK, Gustafsson PE, Hammarström A, Virtanen P: History of labour market attachment as a determinant of health status: a 12-year follow-up of the Northern Swedish Cohort. BMJ Open 2014, 4(2). Shah BR, Laupacis A, Hux JE, Austin PC. Propensity score methods gave similar results to traditional regression modeling in observational studies: a systematic review. J Clin Epidemiol. 2005;58(6):550–9. The authors would like to thank all of the participants of the study. The study was funded by the Swedish Research Council for Health, Working Life and Welfare (dnr 2011–0839) and the Swedish Research Council Formas (dnr 259–2012-37). The Swedish Research Council for Health, Working Life and Welfare and the Swedish Research Council Formas had no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript. 
The study was undertaken at the Umeå Centre for Global Health Research at Umeå University. The questionnaires that were used for this study are available at http://www.medfak.umu.se/english/research/research-projects/lulea_cohort_project/?languageId=1. The datasets generated and/or analyzed during the current study are not publicly available because the Swedish Data Protection Act (1998:204) does not permit sensitive data on humans (like in our study) to be freely shared. The datasets are available based on ethical permission from the Regional Ethical board in Umeå, Sweden, from one of the co-authors (Anne Hammarström). The study was designed by FN in collaboration with UJ and AH. FN performed the analyses and the interpretations in collaboration with UJ and AH. FN drafted the paper, and UJ and AH contributed actively. All authors read and approved the final manuscript. The Regional Ethical Board in Umeå, Sweden, approved the study. Participants consented to participate in the study when they returned their questionnaires. Department of Public Health and Clinical Medicine, Epidemiology and Global Health, Umeå University, SE, 901 87, Umeå, Sweden Fredrik Norström & Urban Janlert Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden Anne Hammarström Fredrik Norström Urban Janlert Correspondence to Fredrik Norström. The questions and possible responses for amount of alcohol intake on each drinking occasion, with each response alternative translated to a value. (DOCX 12 kb) Unstratified estimates of the odds ratio for variables in the logistic regression models. (DOCX 14 kb) Diagnostics of the inverse probably weights for the reduced model. (DOCX 15 kb) Norström, F., Janlert, U. & Hammarström, A. Is unemployment in young adulthood related to self-rated health later in life? Results from the Northern Swedish cohort. BMC Public Health 17, 529 (2017). https://doi.org/10.1186/s12889-017-4460-z Northern Swedish Cohort Self-rated Health Long-term Negative Health Effect Propensity Score
NCERT Solutions for Class 10 Maths Chapter 9 Some Applications of Trigonometry NCERT Solutions for Class 10 M... NCERT Solutions for Class 10 Maths Chapter 9 Some Applications of Trigonometry - Free PDF The NCERT Solutions for Class 10 Maths Chapter 9 Some Applications of Trigonometry gives a detailed explanation of all the questions given in the NCERT textbook. NCERT Solutions help you to score high marks in 10th CBSE board exams as well as increase your confidence level as all the trigonometry related concepts are well-explained in a structured way. The Vedantu expert teachers have prepared the NCERT Solutions Class 10 Maths Chapter 9 some applications of trigonometry free pdfs as per the NCERT syllabus and guidelines given by the CBSE board. NCERT Solution is always beneficial in your exam preparation and revision. Download NCERT Solutions for Class 10 Maths from Vedantu, which are curated by master teachers. Also, you can revise and download Class 10 Science Solutions for Exam 2019-2020, using the updated CBSE textbook solutions provided by us. Access NCERT Solutions for Class 10 Maths Chapter 9 - Some Applications Of Trigonometry Exercise- 9.1 1. A circus artist is climbing a \[20m\] long rope, which is tightly stretched and tied from the top of a vertical pole to the ground. Find the height of the pole, if the angle made by the rope with the ground level is \[\mathbf{30}{}^\circ \]. (Image will be uploaded soon) Ans: By observing the figure, \[AB\] is the pole. In\[\Delta ABC\], $\frac{\text{AB}}{\text{AC}}=\sin {{30}^{{}^\circ }}$ $\Rightarrow \frac{\text{AB}}{20}=\frac{1}{2}$ $\Rightarrow \text{AB}=\frac{20}{2}=10$ Therefore, the height of the pole is$10m$. 2. A tree breaks due to storm and the broken part bends so that the top of the tree touches the ground making an angle \[\mathbf{30}{}^\circ \]with it. The distance between the foot of the tree to the point where the top touches the ground is \[\mathbf{8m}\]. Find the height of the tree. Ans: Let \[AC\]was the original tree. Due to the storm, it was broken into two parts. The broken part \[AB\] is making 30° with the ground. Let $\mathrm{AC}$ be the original tree. Due to the storm, it was broken into two parts. The broken part $\mathrm{A}^{\prime} \mathrm{B}$ is making ${{30}^{{}^\circ }}$ with the ground. In triangle${{\text{A}}^{\prime }}\text{BC}$, $\Rightarrow \frac{BC}{{{A}^{\prime }}C}=\tan {{30}^{{}^\circ }}$ $\Rightarrow \frac{BC}{8}=\frac{1}{\sqrt{3}}$ $\Rightarrow \text{BC}=\left( \frac{8}{\sqrt{3}} \right)\text{m}$ $\Rightarrow \frac{{{\text{A}}^{\prime }}\text{C}}{{{\text{A}}^{\prime }}\text{B}}=\cos 30$ $\Rightarrow \frac{8}{{{A}^{\prime }}B}=\frac{\sqrt{3}}{2}$ $\Rightarrow {{A}^{\prime }}B=\left( \frac{16}{\sqrt{3}} \right)m$ Height of tree $=\mathrm{A}^{\prime} \mathrm{B}+\mathrm{BC}$ $=\left(\frac{16}{\sqrt{3}}+\frac{8}{\sqrt{3}}\right) \mathrm{m}=\frac{24}{\sqrt{3}} \mathrm{~m}$ $=8 \sqrt{3} \mathrm{~m}$ 3. A contractor plans to install two slides for the children to play in a park. For the children below the age of $5$ years, she prefers to have a slide whose top is at a height of\[\mathbf{1}.\mathbf{5m}\], and is inclined at an angle of \[\mathbf{30}{}^\circ \] to the ground, whereas for the elder children she wants to have a steep slide at a height of\[\mathbf{3m}\], and inclined at an angle of \[60{}^\circ \] to the ground. What should be the length of the slide in each case? Ans: It can be observed that $\text{AC}$ and $\text{PR}$ are the slides for younger and elder children respectively. 
In $\vartriangle \text{ABC}$ $\Rightarrow \frac{\text{AB}}{\text{AC}}=\sin 30$ $\Rightarrow \frac{1.5}{\text{AC}}=\frac{1}{2}$ $\Rightarrow \text{AC}=3~\text{m}$ In $\vartriangle \text{PQR}$, $\Rightarrow \frac{\text{PQ}}{\text{PR}}=\sin {{60}^{{}^\circ }}$ $\Rightarrow \frac{3}{\text{PR}}=\frac{\sqrt{3}}{2}$ $\Rightarrow \text{PR}=\frac{6}{\sqrt{3}}=2\sqrt{3}~\text{m}$ Therefore, the lengths of these slides are $3~\text{m}$ and $2\sqrt{3}~\text{m}$. 4. The angle of elevation of the top of a tower from a point on the ground, which is\[\mathbf{30m}\] away from the foot of the tower is \[\mathbf{30}{}^\circ .\] Find the height of the tower. Ans: Let $\mathrm{AB}$ be the tower and the angle of elevation from point $\mathrm{C}$ (on ground) is $30^{\circ}$ In $\vartriangle \text{ABC}$ , $\Rightarrow \frac{\text{AB}}{\text{BC}}=\tan {{30}^{{}^\circ }}$ $\Rightarrow \frac{\text{AB}}{30}=\frac{1}{\sqrt{3}}$ $\Rightarrow \text{AB}=\frac{30}{\sqrt{3}}=10\sqrt{3}~\text{m}$ Therefore, the height of the tower is $10\sqrt{3}~\text{m}$. 5. A kite is flying at a height of \[\mathbf{60m}\] above the ground. The string attached to the kite is temporarily tied to a point on the ground. The inclination of the string with the ground is \[\mathbf{60}{}^\circ .\] Find the length of the string, assuming that there is no slack in the string. Ans: Let $\text{K}$ be the kite and the string is tied to point $\text{P}$ on the ground. In $\vartriangle \text{KLP}$, $\Rightarrow \frac{\text{KL}}{\text{KP}}=\sin {{60}^{{}^\circ }}$ $\Rightarrow \frac{60}{\text{KP}}=\frac{\sqrt{3}}{2}$ $\Rightarrow \text{KP}=\frac{120}{\sqrt{3}}=40\sqrt{3}~\text{m}$ Hence, the length of the string is $40\sqrt{3}~\text{m}$. 6. A \[\mathbf{1}.\mathbf{5m}\] tall boy is standing at some distance from a \[\mathbf{30m}\] tall building. The angle of elevation from his eyes to the top of the building increases from \[\mathbf{30}{}^\circ \] to \[\mathbf{60}{}^\circ \] as he walks towards the building. Find the distance he walked towards the building. Ans : Let the boy was standing at point S initially. He walked towards the building and reached at point T. $\text{PR}=\text{PQ}-\text{RQ}$ $=(30-1.5)=28.5~=\frac{57}{2}~$ In $\vartriangle \text{PAR}$, $\Rightarrow \frac{\text{PR}}{\text{AR}}=\tan {{30}^{{}^\circ }}$ $\Rightarrow \frac{57}{2\text{AR}}=\frac{1}{\sqrt{3}}$ $\Rightarrow \text{AR}=\left( \frac{57}{2}\sqrt{3} \right)\text{m}$ In $\vartriangle \text{PRB}$, $\Rightarrow \frac{\text{PR}}{\text{BR}}=\tan {{60}^{{}^\circ }}$ $\Rightarrow \frac{57}{2B\text{R}}=\sqrt{3}$ $\Rightarrow \text{BR}=\frac{57}{2\sqrt{3}}\text{=}\frac{19\sqrt{3}}{2}m$ By observing the figure, $ST=AB$ $=AR-BR=\left( \frac{57\sqrt{3}}{2}-\frac{19\sqrt{3}}{2} \right)$ $=\left( \frac{38\sqrt{3}}{2} \right)=19\sqrt{3}m$ Hence, he walked $19\sqrt{3}m$ towards the building. 7. From a point on the ground, the angles of elevation of the bottom and the top of a transmission tower fixed at the top of a \[\mathbf{20m}\] high building are \[\mathbf{45}{}^\circ \] and \[\mathbf{60}{}^\circ \] respectively. Find the height of the tower. Ans: Let \[AB\] be the statue, \[BC\] be the pedestal, and \[D\] be the point on the ground from where the elevation angles are to be measured. 
In $\Delta \text{BCD}$, $\Rightarrow \frac{\text{BC}}{\text{CD}}=\tan {{45}^{{}^\circ }}$ $\Rightarrow \frac{\text{BC}}{\text{CD}}=1$ $\Rightarrow \text{BC}=\text{CD}$ In $\Delta A\text{CD}$, $\Rightarrow \frac{\text{AB}+\text{BC}}{\text{CD}}=\tan {{60}^{{}^\circ }}$ $\Rightarrow \frac{\text{AB}+\text{BC}}{\text{CD}}=\sqrt{3}$ $\frac{AB+20}{20}=\sqrt{3}$ $AB=\left( 20\sqrt{3}-20 \right)m$ $=20\left( \sqrt{3}-1 \right)m$ 8. A statue, \[\mathbf{1}.\mathbf{6m}\] tall, stands on a top of pedestal, from a point on the ground, the angle of elevation of the top of statue is \[\mathbf{60}{}^\circ \] and from the same point the angle of elevation of the top of the pedestal is \[\mathbf{45}{}^\circ .\] Find the height of the pedestal. Let AB be the statue, BC be the pedestal, and D be the point on the ground from where the elevation angles are to be measured. $\text{ In }\vartriangle \text{BCD,} $ $ \text{ }\frac{\text{BC}}{\text{CD}}=\tan 45 $ $ \frac{\text{BC}}{\text{CD}}=1 $ $ \text{BC}=\text{CD }$ $ \text{In }\vartriangle \text{ACD, } $ $ \frac{\text{AB}+\text{BC}}{\text{CD}}=\tan {{60}^{{}^\circ }} $ $ \frac{\text{AB}+\text{BC}}{\text{CD}}=\sqrt{3}\text{ } $ $ 1.6+\text{BC}=\text{BC}\sqrt{3}\quad [\text{As}\,\,\text{CD}=\text{BC}]\,\, $ $ \text{BC}(\sqrt{3}-1)=1.6\,\,\, $ $ \text{BC}=\frac{(1.6)(\sqrt{3}+1)}{(\sqrt{3}-1)(\sqrt{3}+1)}\quad [\text{ByRationalization}]$ $ =\frac{1.6(\sqrt{3}+1)}{{{(\sqrt{3})}^{2}}-{{(1)}^{2}}}$ $=\frac{1.6\left( \sqrt{3}+1 \right)}{2}=0.8\left( \sqrt{3}+1 \right)m$ 9. The angle of elevation of the top of a building from the foot of the tower is \[\mathbf{30}{}^\circ \] and the angle of elevation of the top of the tower from the foot of the building is \[\mathbf{60}{}^\circ .\] If the tower is \[\mathbf{50m}\] high, find the height of the building. Ans: Let \[AB\]be the building and \[CD\]be the tower. In $\Delta \text{CDB}$, $\Rightarrow \frac{\text{CD}}{\text{BD}}=\tan {{60}^{{}^\circ }}$ $\Rightarrow \frac{50}{\text{BD}}=\sqrt{3}$ $\Rightarrow \text{BD}=\frac{50}{\sqrt{3}}$ In $\Delta ABD$ $\Rightarrow \frac{AB}{BD}=\tan 30{}^\circ $ $\Rightarrow AB=\frac{50}{\sqrt{3}}\left( \frac{1}{\sqrt{3}} \right)=\frac{50}{3}=16\frac{2}{3}$ Therefore, the height of the building is $16\frac{2}{3}$m. 10. Two poles of equal heights are standing opposite each other on either side of the road, which is \[\mathbf{80m}\]wide. From a point between them on the road, the angles of elevation of the top of the poles are \[\mathbf{60}{}^\circ \]and \[\mathbf{30}{}^\circ \]respectively. Find the height of poles and the distance of the point from the poles. Ans: Let \[AB\] and \[CD\] be the poles and \[O\] is the point from where the elevation angles are measured. 
In $\Delta \mathrm{CDO}$, $\Rightarrow \frac{\text{AB}}{\text{BO}}=\tan {{60}^{{}^\circ }}$ $\Rightarrow \frac{\text{AB}}{\text{BO}}=\sqrt{3}$ $\Rightarrow \text{BO}=\frac{\text{AB}}{\sqrt{3}}$ In $\Delta \text{CDO}$, $\Rightarrow \frac{\text{CD}}{\text{DO}}=\tan {{30}^{{}^\circ }}$ $\Rightarrow \frac{\text{CD}}{80-\text{BO}}=\frac{1}{\sqrt{3}}$ $\Rightarrow \text{CD}\sqrt{3}=80-\text{BO}$ $\Rightarrow \text{CD}\sqrt{3}=80-\frac{\text{AB}}{\sqrt{3}}$ $\Rightarrow \text{CD}\sqrt{3}+\frac{\text{AB}}{\sqrt{3}}=80$ Since the poles are of equal heights, $\text{CD}=\text{AB}$ $\Rightarrow \text{CD}\left[ \sqrt{3}+\frac{1}{\sqrt{3}} \right]=80$ $\Rightarrow \text{CD}\left( \frac{3+1}{\sqrt{3}} \right)=80$ $\Rightarrow \text{CD}=20\sqrt{3}~\text{m}$ $\Rightarrow BO=\frac{AB}{\sqrt{3}}=\frac{CD}{\sqrt{3}}=\frac{20\sqrt{3}}{\sqrt{3}}=20m$ $\Rightarrow DO=BD-BO=80-20=60m$ Therefore, the height of poles is $20\sqrt{3}$ and the point is $20m$ and $60m$ far from these poles. 11. A TV tower stands vertically on a bank of a canal. From a point on the other bank directly opposite the tower the angle of elevation of the top of the tower is \[\mathbf{60}{}^\circ .\] From another point \[\mathbf{20m}\]away from this point on the line joining this point to the foot of the tower, the angle of elevation of the top of the tower is \[\mathbf{30}{}^\circ .\] Find the height of the tower and the width of the canal. Ans: In $\Delta \text{ABC}$, $\Rightarrow \dfrac{\text{AB}}{\text{BC}}=\tan {{60}^{{}^\circ }}$ $\Rightarrow \dfrac{\text{AB}}{\text{BO}}=\sqrt{3}$ $\Rightarrow \text{BC}=\frac{\text{AB}}{\sqrt{3}}$…. (1) In $\Delta ABD$, $\Rightarrow \dfrac{AB}{\text{BD}}=\tan {{30}^{{}^\circ }}$ $\Rightarrow \dfrac{AB}{BC+CD}=\frac{1}{\sqrt{3}}$ $\Rightarrow \dfrac{AB}{\frac{AB}{\sqrt{3}}+20}=\dfrac{1}{\sqrt{3}}$ $\Rightarrow \dfrac{AB\sqrt{3}}{AB+20\sqrt{3}}=\dfrac{1}{\sqrt{3}}$ $\Rightarrow 3AB=AB+20\sqrt{3}$ $\Rightarrow 2AB=20\sqrt{3}$ $\Rightarrow AB=10\sqrt{3}m$ Substitute $AB=10\sqrt{3}m$ in $\text{BC}=\dfrac{\text{AB}}{\sqrt{3}}$, $\Rightarrow BC=\dfrac{AB}{\sqrt{3}}=\dfrac{10\sqrt{3}}{\sqrt{3}}=10m$ Therefore, the height of the tower is $10\sqrt{3}m$ and the width of the canal is $10m$. 12. From the top of a \[\mathbf{7m}\] high building, the angle of elevation of the top of a cable tower is \[\mathbf{60}{}^\circ \] and the angle of depression of its foot is \[\mathbf{45}{}^\circ .\] Determine the height of the tower. Ans: Let $AB$ be a building and $CD$ be a cable tower. In $\Delta \text{ABD}$, $\Rightarrow \frac{\text{AB}}{\text{BD}}=\tan {{45}^{{}^\circ }}$ $\Rightarrow \frac{7}{\text{BD}}=1$ $\Rightarrow \text{BD=7m}$ In $\Delta \text{ACE}$, $AE=BD=7m$ $\Rightarrow \frac{\text{CE}}{AE}=\tan {{60}^{{}^\circ }}$ $\Rightarrow \frac{\text{CE}}{7}=\sqrt{3}$ $\Rightarrow \text{CE=7}\sqrt{3}\text{m}$ $\Rightarrow CD=CE+ED=\left( 7\sqrt{3}+7 \right)=7\left( \sqrt{3}+1 \right)m$ Therefore, the height of the cable tower is $7\left( \sqrt{3}+1 \right)m$. 13. As observed from the top of a \[\mathbf{75m}\]high lighthouse from the sea-level, the angles of depression of two ships are \[\mathbf{30}{}^\circ \]and \[\mathbf{45}{}^\circ .\]If one ship is exactly behind the other on the same side of the lighthouse, find the distance between the two ships. Ans: Let $AB$be the lighthouse and the two ships be at point $C$ and \[D\]respectively. 
In $\Delta \text{ABC}$, $\Rightarrow \frac{75}{\text{BC}}=1$ $\Rightarrow \text{BC=75m}$ $\Rightarrow \frac{AB}{BD}=\tan {{30}^{{}^\circ }}$ $\Rightarrow \frac{75}{BC+CD}=\frac{1}{\sqrt{3}}$ $\Rightarrow \frac{75}{75+CD}=\frac{1}{\sqrt{3}}$ $\Rightarrow 75\sqrt{3}=75+CD$ $\Rightarrow 75\left( \sqrt{3}-1 \right)m=CD$ Therefore, the distance between the two ships is $75\left( \sqrt{3}-1 \right)m.$ 14. A \[\mathbf{1}.\mathbf{2m}\] tall girl spots a balloon moving with the wind in a horizontal line at a height of \[\mathbf{88}.\mathbf{2m}\] from the ground. The angle of elevation of the balloon from the eyes of the girl at any instant is \[\mathbf{60}{}^\circ .\] After some time, the angle of elevation reduces to \[\mathbf{30}{}^\circ .\] Find the distance travelled by the balloon during the interval. Ans: Let the initial position $A$ of balloon change to $B$ after some time and \[CD\] be the girl. In $\Delta \text{ACE}$, $\Rightarrow \frac{\text{AE}}{\text{CE}}=\tan {{60}^{{}^\circ }}$ $\Rightarrow \frac{\text{AF-EF}}{\text{CE}}=\tan {{60}^{{}^\circ }}$ $\Rightarrow \frac{\text{88}\text{.2-1}\text{.2}}{\text{CE}}=\sqrt{3}$ $\Rightarrow \frac{\text{87}}{\text{CE}}=\sqrt{3}$ $\Rightarrow CE=\frac{87}{\sqrt{3}}=29\sqrt{3}m$ In $\Delta B\text{CG}$, $\Rightarrow \frac{\text{BG}}{\text{CG}}=\tan {{30}^{{}^\circ }}$ $\Rightarrow \frac{\text{BH-GH}}{\text{CG}}=\frac{1}{\sqrt{3}}$ $\Rightarrow \frac{\text{88}\text{.2-1}\text{.2}}{\text{CG}}=\frac{1}{\sqrt{3}}$ $\Rightarrow \text{87}\sqrt{3}=CG$ Distance travelled by balloon$=EG=CG-CE$ $ =\left( 87\sqrt{3}-29\sqrt{3} \right) $ $ =58\sqrt{3}m $ 15. A straight highway leads to the foot of a tower. A man standing at the top of the tower observes a car at an angle of depression of \[\mathbf{30}{}^\circ ,\] which is approaching the foot of the tower with a uniform speed. Six seconds later, the angle of depression of the car is found to be \[\mathbf{60}{}^\circ .\] Find the time taken by the car to reach the foot of the tower from this point. Ans: Let \[AB\] be the tower. Initial position of the car is \[C\], which changes to \[D\] after six seconds. In $\Delta \text{ADB}$, $\Rightarrow \frac{\text{AB}}{CB}=\tan {{60}^{{}^\circ }}$ $\Rightarrow \frac{\text{AB}}{DB}=\sqrt{3}$ $\Rightarrow DB=\frac{AB}{\sqrt{3}}$ $\Rightarrow \frac{\text{AB}}{BC}=\tan {{30}^{{}^\circ }}$ $\Rightarrow \frac{\text{AB}}{BD+DC}=\frac{1}{\sqrt{3}}$ $\Rightarrow AB\sqrt{3}=BD+DC$ $\Rightarrow AB\sqrt{3}=\frac{AB}{\sqrt{3}}+DC$ $\Rightarrow DC=AB\sqrt{3}-\frac{AB}{\sqrt{3}}=AB\left( \sqrt{3}-\frac{1}{\sqrt{3}} \right)=\frac{2AB}{\sqrt{3}}$ Time taken by the car to travel distance DC $\left( i.e\frac{2AB}{\sqrt{3}} \right)=6$ seconds. Time taken by the car to travel distance DB $\left( ie.,\frac{AB}{\sqrt{3}} \right)=\frac{6}{\frac{2AB}{\sqrt{3}}}\left( \frac{AB}{\sqrt{3}} \right)=\frac{6}{2}=3$ seconds. 16. The angles of elevation of the top of a tower from two points at a distance of \[4m\]and \[9m\]from the base of the tower and in the same straight line with it are complementary. Prove that the height of the tower is\[6m\]. Ans: Let \[AQ\]be the tower and \[R,S\]are the points \[4m,\text{ }9m\]away from the base of the tower respectively. The angles are complementary. Therefore, if one angle is\[\theta \], the other will be \[90\text{ }-\text{ }\theta \] In $\Delta \text{AQR}$, $\Rightarrow \frac{\text{AQ}}{QR}=\tan \theta $ $\Rightarrow \frac{\text{AQ}}{4}=\tan \theta $ …. 
(i) In $\Delta \text{AQS}$, $\Rightarrow \frac{\text{AQ}}{SQ}=\tan (90-\theta )$ $\Rightarrow \frac{AQ}{9}=\cot \theta $ ….. (ii) On multiplying equations (i) and (ii), we obtain $\Rightarrow \left( \frac{AQ}{4} \right)\left( \frac{AQ}{9} \right)=\left( \tan \theta \right)\left( \cot \theta \right)$ $\Rightarrow \frac{A{{Q}^{2}}}{36}=1$ $\Rightarrow A{{Q}^{2}}=36$ $\Rightarrow AQ=\sqrt{36}=\pm 6$ However, height cannot be negative. Therefore, the height of the tower is $6m$. NCERT Solutions for Class 10 Maths Chapter 9 Some Applications of Trigonometry - PDF Download You can opt for Chapter 9 - Some Applications of Trigonometry NCERT Solutions for Class 10 Maths PDF for Upcoming Exams and also You can Find the Solutions of All the Maths Chapters below. Chapter 1 - Real Numbers Chapter 3 - Pair of Linear Equations in Two Variables Chapter 4 - Quadratic Equations Chapter 5 - Arithmetic Progressions Chapter 6 - Triangles Chapter 9 - Some Applications of Trigonometry Chapter 10 - Circles Chapter 11 - Constructions Chapter 12 - Areas Related to Circles Chapter 15 - Probability About the Chapter In the Class 10 Maths Chapter 9, you will study about different ways in which trigonometry is used to find the height and distance of different objects without actually measuring them. This chapter is divided into 3 sections and one exercise. The first section is the basic introduction of trigonometry in which you will learn, how the need for the trigonometry arose and its application in different fields. The second section includes an introduction to height and distance, important terms related to the height and distance, conditions where the trigonometry concepts are used along with the examples, and last but not the least one exercise at the end. The questions asked in the exercise are based on the basic concepts of trigonometry and its application. The third section includes a summary of the chapter where some important terms given in the chapter are discussed. Some Applications of Trigonometry List of topics and exercise covered in Class 10 Chapter 9 Some applications of trigonometry. Section 9.1: Introduction to some applications of trigonometry Section 9.2: Height and Distance Exercise 9.1: Questions related to some applications of trigonometry. This exercise included 16 questions. Section 9.3: Summary of the chapter In the ch 9 Maths class 10, we will be learning about trigonometry, some application of trigonometry, and the entire summary of the chapter. What is Trigonometry? Trigonometry is one of the most historical subjects studied by different scholars throughout the world. As you have read in Chapter 8 that trigonometry was introduced because its requirement arose to astronomy. Since then trigonometry is used to calculate the distance from the Earth to the stars and the planets. The most important use of trigonometry is to find out the height of the highest mountain in the world i.e. Mount Everest which is named after Sir George Everest. It is also widely used in Geography and navigation. The knowledge of trigonometry enables us to construct maps, evaluate the position of an island concerning the longitudes and latitudes. Historical Facts. Let us Turn to the History of Trigonometry The trigonometry was used by surveyors for centuries. One of the notable and the largest surveying projects of the nineteenth century was the " Great Trigonometric Survey" of British India for which the two largest theodolites were constructed. 
The highest mountain in the world was discovered during the survey in 1852. From a distance of over 160 km, this peak was seen from 6 distinct stations. This peak was named after Sir George Everest who had first used the theodolites. Theodolites are now exhibited in the museum of the surveys of Dehradun. Surveying instrument, which is used for measuring angles with a rotating telescope In this topic, you will study about the line of sight, angle of elevation, horizontal level, and angle of depression. All these terms are explained in a detailed form along with some solved examples based on it. These solved examples based on the terms line of sight, angle of elevation and angle of depression will help you to understand the concepts thoroughly. How to Calculate Height and Distance? Trigonometric ratios are used to find out the height and the distance of the object. For example: In figure 1, you can see a boy looking at the top of the lampost. AB is considered as the horizontal level. This level is stated as the line parallel to the ground passing through the viewer's eyes. AC is considered as the line of sight. ∠A is known as the angle of elevation. Similarly, in figure 2, you can see PQ is the line of sight, PR is the horizontal level and ∠P is known as the angle of elevation. An inclinometer or Clinometer is a device usually used for measuring the angle of elevation and the angle of depression. Let us recall some trigonometric ratios which help to solve the questions based on class 10 maths Chapter 9. Trigonometry Ratios The ratio of the sides of a right-angle triangle in terms of any of its acute angle triangle is known as the trigonometric ratio of that specific angle. In terms of ∠C, the ratio of trigonometry are given as: Sine - The sine of an angle is stated as the ratio of the opposite side ( perpendicular side) to that angle to the hypotenuse side. Hence, Sine C = Opposite side/Hypotenuse side Cosine- The cosine of an angle is stated as the ratio of the adjacent side to that angle to the hypotenuse side. Hence, Cosine C = Adjacent side /Hypotenuse side Tangent - The tan of an angle is stated as the ratio of the opposite side (perpendicular side) to that angle to the side adjacent to that angle. Hence, Tan C = Opposite side/Adjacent side Cosecant- It is the reciprocal of sine. Hence, Cosec C = Hypotenuse side/Opposite side Secant- It is the reciprocal of cosine. Hence, Sec C= Hypotenuse side/Adjacent side Cotangent- It is the reciprocal of tangent. Hence, Cot C = Adjacent side/Opposite side The following trigonometry ratio table is used to calculate the questions based on applications of trigonometry class 10 NCERT solutions. Trigonometric Ratio Table ∠C Sin C Cos C Tan C Cosc C Sec C Cot C Now, you must have understood all the important topics and terms covered in each section of class 10 maths chapter 9. Perfect understanding of NCERT class 10 chapter 9 Introduction helps you to focus on some points such as the weightage of the chapter, important questions that can be asked in the examination, types of questions that can be appeared in your, etc. This will help you to solve the exam more confidently and also ensures you that you can finish your exam within a time-duration. As, there is a proverb that says "Practice makes the men perfect". It tells us the importance of practicing continuously in any subject to learn anything. Continuous practice is a must to learn any of the subjects. 
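The ratio table just above kept only its column headings; the standard values did not survive extraction. As a stop-gap, here is a small Python sketch — not part of the original page, with illustrative variable names — that prints the usual values for 0°, 30°, 45°, 60° and 90° and then applies the tangent ratio the same way the Exercise 9.1 solutions do (Q4, for example, gives 30 · tan 30° = 10√3 m):

```python
import math

# Standard trigonometric ratios for the angles used throughout Chapter 9.
for deg in (0, 30, 45, 60, 90):
    rad = math.radians(deg)
    tan_str = "undefined" if deg == 90 else f"{math.tan(rad):.4f}"
    print(f"{deg:>2} deg  sin={math.sin(rad):.4f}  cos={math.cos(rad):.4f}  tan={tan_str}")

# Height-and-distance pattern: height = distance * tan(angle of elevation).
# Q4 of Exercise 9.1: 30 m from the foot of the tower, elevation 30 degrees.
height = 30 * math.tan(math.radians(30))
print(round(height, 3), round(10 * math.sqrt(3), 3))  # both print 17.321
```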
Practicing class 10 maths Chapter 9 NCERT solutions designed by Vedantu experts will bring accuracy and confidence in you as they are designed according to the caliber of the students. It helps you to increase the speed of solving your problems and also bring more accuracy in you. With practicing NCERT questions more and more, you will be aware of the types of questions that can be asked in the examination. This will help you to solve your exam paper more confidently. Practicing not merely enhances your conceptual understanding but also enhances your logical reasoning. Most of the time the questions asked in the examination are repeated and solving the previous questions helps you to solve the questions speedily and accurately in the exam. By following the above-mentioned points, you can surely score above 90% in your board exam. Important features of the NCERT Solutions for Class 10 Maths Chapter 9- Some Applications of Trigonometry The solutions are designed by the subject experts of Vedantu. Chapter-wise questions and solutions are easily accessible. Special guidance for the students preparing for their board examinations. Exercise questions are easily accessible. The solutions are well-explained in the comprehensive method. How Vedantu NCERT Solutions for class 10 maths Chapter 9 -Some applications of trigonometry will help you to score good marks in your board exams? NCERT Solution for class 10 plays an important role in shaping the future of the students as the grades which the students will score will shape the future of the student. The NCERT solution prepared by the professionals of Vedantu is a one-stop solution for all your queries related to class 10 maths chapter 9. Detailed explanation and stepwise solutions for each question prepared by the experts will help you to understand the concept in a better way. The NCERT solutions prepared by the experts of Vedantu provides excellent material for the student to practice and make the learning process more effective. The importance of the Vedantu's NCERT Solutions for Class 10 Maths Chapter 9-Some applications of trigonometry are: Solutions are framed keeping in mind the age of the students. The content of the topic is pointed, brief, and straightforward. Complex questions are divided into small parts and well-explained to save the students from taking the unnecessary strain. Every question is explained with the relevant image to understand the question precisely. The solutions are designed under the latest syllabus and CBSE guidelines. Vedantu experts tried their level best to provide your NCERT Solutions for Class 10 Maths Chapter 9. The aim to provide the solution is to help the students to solve each question given in the board exams in no time. Why are Some Applications of Trigonometry Important? Class 10 Chapter 9 some application of trigonometry is an important topic to discuss as it tells how trigonometry is used to find the height and distance of different objects such as the height of the building, the distance between the Earth and Planet and Stars, the height of the highest mountain Mount Everest, etc. To solve the questions based on some applications of trigonometry class 10, it is necessary to remember trigonometry formulas, trigonometric relations, and values of some trigonometric angles. The following are the concepts covered in the 'height and distance' Some applications of trigonometry. To measure the height of big towers or big mountains To determine the distance of the shore from the sea. 
To find out the distance between two celestial bodies. This chapter has a weightage of 12 marks in class 10 Maths Cbse (board) exams. One question can be expected from this chapter. Class 10 Maths CBSE paper is divided into 4 parts and each question comes with different marks. The questions will be allocated with 1 mark, 2 marks, 3 marks or 4 marks. Discussion about the sections, exercise, and type of questions given in the exercise. The important topic " Height and Distance" covered in Some applications of trigonometry class 10 is followed by one exercise with 16 questions. The exercise aims to test your knowledge and how deeply you understood each formula and concept of the topic. The numerical questions given in this chapter are based on some applications of trigonometry. To make you understand the topic and related concept, solved numerical problems are also given. Stepwise solutions are given for each of the solved examples. It will help you to understand which concept and formula will be used to solve the given questions accurately. Section 9.1 - Introduction This section gives an introduction to some applications of trigonometry. It tells you how trigonometry is used by different scholars throughout the world and its uses in different fields. It also tells you the way trigonometry is used to find the height and distance of different objects without actually measuring them. In this section, some important terms such as a line of sight, horizontal level, angle of elevation, and angle of depression are discussed. All these important terms are discussed along with the solved examples based on them which will clear your concepts thoroughly and also helps you to solve the questions given in the exercise. Exercise 9.1: "Height and Distance" This exercise includes a total of 16 questions. Each question asked in the exercise are based on the concept of "Height and Distance". Description of the Questions Asked in Exercise 9.1 Question No. Given Information To calculate The angle of elevation and the length of the rope are given We have to calculate the height of the tower. The distance of the object and angle of elevation are given We have to calculate the height of the tree The angle of elevation and height of the two slides are given We need to calculate the length of the slide Height of the object and the distance of the object are given The angle of depression and height of the observer from the ground are given We have to calculate the distance between two objects The angle of elevation from the ground to the bottom of the tower and angle of elevation from the ground to the top of the tower are given Length of the statue, angle of elevation to the top of the statue and angle of elevation to the top of the pedestal are given We have to calculate the height of the pedestal The angle of elevation of the top of the building from the foot of the tower, Angle of elevation of the top of the tower from the foot of the building and height of the tower are given We have to calculate the height of the building Angles of elevations of the top of the two towers and distance between the two poles are given We have to calculate the height of the tower and the distance of the point from the poles. 
One angle of elevation from the bank of the river and another angle of elevation 20m away from the bank of the river are given To calculate: Height of the tower, width of the canal The angle of elevation, angle of depression and the length of the top of the building are given We have to calculate the height of the tower The angle of depression of two ships and the height of lighthouse from the sea level is given We have to calculate the distance between two ships The angle of elevation from one point to the top of the tower and angle of elevation from another point to the top of the tower are given We have to calculate the height of the tower and width of the canal A man first observe the car at an angle of depression of 30° After 6 seconds, a man again observe the car at an angle of depression of 60°. We have to calculate the time taken by the car to reach the foot of the tower Angles of elevation from one point and angle of elevation from another point are complementary and also the distance between two points from where the angle of elevation is formed is 4 m and 9 m. To prove: Height of the tower 6 m Section 9.3: Summary The summary at the end of the chapter details a brief explanation of all the topics you covered in this chapter. Important Terms to Remember in Height and Distance Line of Sight - It is a line that is drawn from the eye of an observer to the point on the object viewed by the observer. The Angle of Elevation - It is defined as an angle that is formed between the horizontal line and line of sight. If the line of sight lies upward from the horizontal line, then the angle formed will be termed as an angle of elevation. Let us take another situation when a boy is standing on the ground and he is looking at the object from the top of the building. The line joining the eye of the man with the top of the building is known as the line of sight and the angle drawn by the line of sight with the horizontal line is known as angle of elevation. In the above figure line of sight is forming an angle θ through the horizontal line.This angle is known as the angle of elevation. The Angle of Depression - It is defined as an angle drawn between the horizontal line and line of sight. If the line of sight lies downward from the horizontal line, then the angle formed will be termed as an angle of depression. Let us take a situation when a boy is standing at some height concerning the object he is looking at. In this case, the line joining the eye of the man with the bottom of the building is known as the line of sight and the angle drawn by the line of sight with the horizontal line is known as angle of depression. In the above figure angle, θ is considered as the angle of depression Note: Angle of elevation is always equal to the angle of depression The important Point to Remember The distance of the object is also considered as the base of the right angle triangle drawn through the height of the object and the line of sight. The length of the horizontal level is also known as the distance of the object it forms the base of the triangle. Line of sight is considered as a hypotenuse of the right-angle triangle. Hypotenuse side is calculated using Pythagorean Theorem if the height and distance of the object are given. Benefits of NCERT Solutions for Class 10 Maths The benefits of the NCERT Solutions for Class 10 Maths Chapter 9 Some applications of Trigonometry are as follows. According to the guidelines set by CBSE, all answers are written. 
The NCERT Solutions PDF file is available for free download. The content, which is short and self-explanatory, and well organized. To enhance the understanding of the topic, some of the answers include relevant infographics and images. During the revision of the test, it is useful and acts as a note. For you to solve the maximum questions and to get the chapter's command, solutions are kept simple. In chapter Linear Equations in two variables, these NCERT Solutions are written specifically so that students do not face any difficulties while solving any questions. Our experts have made sure that it is very easy to grasp these NCERT Solutions. In the form of solutions, it covers the whole syllabus and theory. Other study materials are available on the Vedantu app and website, in addition to NCERT Solutions. Chapter-wise solutions for different subjects are also available for the convenience of students. Download the Vedantu app right now and use the study material on the go. 1. How downloading "NCERT Solutions for Class 10 Maths Chapter 9 Some Applications of Trigonometry" PDF from Vedantu can help you to score better grades in exams? Vedantu executed well-designed and reviewed applications of trigonometry class 10 ncert solutions which include the latest guidelines as recommended by CBSE board. The solutions are easily accessible from the Vedantiu online learning portal. It can be easily navigated as Vedantu retains a user-friendly network. The detailed solution of all the questions asked in the NCERT book will help you to understand the basic concepts of some applications of trigonometry class 10 thoroughly. Along with the NCERT solutions, the experts of Vedantu also provide various tips and techniques to solve the questions in the exam. Sign in to Vedantu and get access to the ncert solutions for class 10 maths chapter 9 some applications of trigonometry -Free pdf and you can download the updated solution of all the exercises given in the chapter. 2. What is the importance of trigonometry applications in real-life? The application of trigonometry may not be directly used in solving practical issues but used in distinct fields. For example, trigonometry is used in developing computer music as you must be aware of the fact that sound travels in the form of waves and this wave pattern using sine and cosine functions helps to develop computer music. The following are some of the applications where the concepts of trigonometry and its functions are applicable. It is used to measure the height and distance of a building or a mountain It is used in the Aviation sector It is used in criminology It is used in Navigation It is used in creating maps It is used in satellite systems The basic trigonometric functions such as sine and cosine are used to determine the sound and light waves. It is used in oceanography to formulate the height of waves and tides in the ocean. 3. The distance from where the building can be viewed is 90ft from its base and the angle of elevation to the top of the building is 35°. Calculate the height of the building. Given: Distance from where the building can be viewed is 90ft from its base and angle of elevation to the top of the building is stated as 35° To calculate the height of the building, we will use the following trigonometry formula Tan 35° = Opposite Side/ Adjacent Side Tan 35° = H/90 H = 90 x Tan 35° H = 90 x 0.7002 H= 63.018 feet Hence, the height of the building is 63.018 feet. 4. 
From a 60 meter high tower, the angle of depression of the top and bottom of a house are 'a and b' respectively. Calculate the value of x, if the height of the house is [60 sin (β − α)]/ x. H = d tan β and H- h = d tan α 60/60-h = tan β – tan α h = [60 tan α – 60 tan β] / [tan β] h= [60 sin (β – α)/ [(cos α cos β)] [(sin α sin β)] x = cos α cos β 5. The top of the two different towers of height x and y, standing on the ground level subtended angle of 30° and 60° respectively at the center of the line joining their feet. Calculate x: y. In ΔABE, x/a = Tan 30° x/a = 1/3 x = a/3 In ΔCDE y/a = Tan 60° y/a = 3 → y = a x 3 x/a ÷ y/a = a/3÷ a3 x/a × y/a = a/3 × 1/a×3 x/y = 1/3 6. What are the main points to study in Class 10 Maths NCERT Chapter 9? The main points to study in Class 10 Maths NCERT Chapter 9 include: Fundamental Basics Of Trigonometry Applications The History Behind Trigonometry Concepts On Height And Distance The Study About The Line Of Sight Angle Of Elevation, Horizontal Level Angle Of Depression The Calculation Methods For Heights And Distance The Trigonometry Ratios And Angle Tables For The Ratios A clear explanation for the solutions of the same can be found on the Vedantu website. 7. How many exercises are there in Chapter 9 of Class 10 Maths? Chapter 9 of Class 10 Maths consists of one exercise that is Exercise 9.1 at the end of this chapter. This section of the chapter has 16 questions that are covered in the first two parts. The questions asked in the exercise are based on the basic concepts of trigonometry and its application, height distance etc. Following this, is a summary of the chapter. 8. How can I score full marks in Class 10 Chapter 9 Maths? Maths is all about practice. If you practice your exercises regularly you will gradually lean towards perfection and accuracy. It will also increase your speed and hence you will be able to finish your paper on time, attempting all the questions, thereby scoring full marks in this subject. For this, you can refer to the Vedantu website or download the Vedantu app which will provide you with the best solutions at free of cost and help you fulfil your aim of scoring top grades in Maths. 9. Where can I get the best solutions for Maths chapter 9? You can find the best solutions for NCERT Class 10 Maths Chapter 9 on Vedantu. Here, you will find the best possible trigonometry explanations and their solutions for respective exercises. These are provided in PDF format, which is totally free to download and you can study from it even in offline mode. For this: Visit the page NCERT Solutions for Class 10 Maths Chapter 9. Click on the download PDF option. Once you're redirected to a page you will be able to download it. 10. What is the advantage of using Vedantu? Vedantu outshines all other learning apps and websites because it has the best team of experts who analyse and form the best answers for the students to learn and practice from. This has a complete chapter-wise explanation and their respective solutions. This also has options for parents to take weekly homely tests from Vedantu so that they can keep a check on their wards.
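As a quick sanity check on the worked FAQ answers above (questions 3 to 5), the sketch below evaluates them numerically. Only the Python standard library is used, and the test angles in the last block are arbitrary choices of mine. Note that the derivation printed for question 4 appears garbled; under the standard reading of that problem, the check below gives x = cos α · sin β:

```python
import math

# FAQ 3: building viewed from 90 ft at a 35 degree angle of elevation.
print(90 * math.tan(math.radians(35)))  # ~63.02 ft, matching "63.018 feet"

# FAQ 5: towers of heights x and y subtend 30 and 60 degrees at the same distance a,
# so x : y = tan 30 : tan 60.
print(math.tan(math.radians(30)) / math.tan(math.radians(60)))  # 0.3333... = 1/3

# FAQ 4 (hedged check): with depression angles alpha (to the top of the house) and
# beta (to its bottom) from a 60 m tower, h = 60 - d*tan(alpha) with d = 60/tan(beta),
# which algebra rewrites as 60*sin(beta - alpha) / (cos(alpha)*sin(beta)).
a, b = math.radians(20), math.radians(50)  # arbitrary test angles
d = 60 / math.tan(b)
print(60 - d * math.tan(a))                                # ~41.68
print(60 * math.sin(b - a) / (math.cos(a) * math.sin(b)))  # ~41.68, identical
```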
Interpretation of Fourier Transform I know that the idea of the Fourier Transform is to break a function into a sum of trigonometric functions. Consider the following function: $$ f_{\alpha}(t) = e^{-\alpha|t|}$$ The Fourier Transform of this is $$\tilde{f}_{\alpha}(\omega) = \frac{1}{2 \pi} \int_{-\infty}^{\infty} e^{-i \omega t} e^{-\alpha |t|} \, dt = \frac{\alpha}{\pi(\alpha^{2}+\omega^{2})}$$ What precisely does this mean? How does this relate to the question of breaking a function into a sum of trigonometric functions? fourier-analysis leroy
A Fourier transform on the circle (or some interval) breaks a function up into trigonometric pieces. It's a little harder to interpret the Fourier transform on the line, which you have above. – Potato Jun 24 '13 at 3:18
It's best to think of $\mathcal{F}f(\omega)$ as being analogous to the coefficients of the Fourier series expansion. In fact, when deriving the Fourier transform from the Fourier series (by taking appropriate limits), that is exactly the role it plays. So you can (VERY) loosely think of $\frac{\alpha}{\pi(\alpha^2+\omega^2)}$ as the amplitude of the Fourier component $e^{i\omega t}$. But @Potato is right: it's a little harder to interpret. Even with what I said above, your indexing set is no longer the integers but the real numbers, so it gets to be a little opaque. – Cameron Williams Jun 24 '13 at 3:19
Just to add the Wikipedia entry's nice animation (it basically says the same thing as Cameron Williams): upload.wikimedia.org/wikipedia/commons/7/72/… – Shuhao Cao Jun 24 '13 at 3:35
@ShuhaoCao that is a really neat animation, I have to say. I'll be using that in the future :) – Cameron Williams Jun 24 '13 at 3:47
I'd suggest asking engineers that deal with DSP rather than mathematicians. The latter will undoubtedly complicate things for you on this one. – AnonSubmitter85 Jun 24 '13 at 3:48
I had this same question when learning about the Fourier transform on the real line. The Fourier transform on the circle (or some interval) is very clearly motivated: it breaks a function up into periodic pieces so it's easier to handle. But it's hard to see what the Fourier transform on the real line is doing. The only thing I could find that fully answered my questions was this article by Terence Tao. It's excellent. – Potato
The value of the Fourier transform at a given frequency (i.e., $\omega$) is simply the contribution of that frequency to the signal. If you consider something basic, say a sine wave (which has just a single frequency), then $\tilde{f}(\omega)$ will be zero everywhere except where $\omega$ equals the frequency of the sine wave, at which point $\tilde{f}(\omega)$ will have a magnitude equal to the amplitude of the wave and a phase equal to its argument at $t=0$. The signal in your example, however, is made up of a continuum of frequencies, and $\tilde{f}(\omega)$ just tells you the contribution of each. – AnonSubmitter85
I brought an extensive answer to a very similar question over here and hope that it will help you. The function there targets a discrete inverse Fourier transform but is general enough to also cover your question, which is quite generic: How to interpret Fourier Transform result? – al-Hwarizmi
The idea of the Fourier series is that every periodic function can be decomposed into an infinite series of sines and cosines. The Fourier transform is a generalization of this result. Any function $F$ (satisfying some conditions) can be written in the form $$F(x)= \int_{-\infty}^{\infty}\bar{F}(f)\,e^{2\pi i f x}\,df.$$ You can interpret this integral as "decomposing" the function $F$ in terms of the oscillations $e^{2\pi i f x}$. Here $\bar{F}$ is the Fourier transform of $F$, so the value of $\bar{F}$ at $f$ gives the "contribution" of $e^{2\pi i f x}$ to this integral. In this sense, the Fourier transform is analogous to the Fourier coefficients of a Fourier series. – Mohan
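One concrete way to see the "sum of trigonometric functions" reading for the example in the question is to push the transform back through the inversion integral and watch $e^{-\alpha|t|}$ reappear as a weighted continuum of cosines. The sketch below does this numerically; the choice of α, the truncated frequency grid, and the variable names are mine, and the convention matches the question (the 1/2π sits in the forward transform, so the inverse carries no prefactor and the sine part cancels by symmetry):

```python
import numpy as np

alpha = 2.0
omega = np.linspace(-2000.0, 2000.0, 400001)     # truncated frequency grid
domega = omega[1] - omega[0]
f_hat = alpha / (np.pi * (alpha**2 + omega**2))  # the transform computed above

# Inversion: f(t) = integral of f_hat(w) * e^{i w t} dw -- a continuum of cosines,
# each weighted by f_hat(w); the sine (imaginary) part vanishes because f_hat is even.
t = np.linspace(-3.0, 3.0, 13)
recon = np.array([np.sum(f_hat * np.cos(omega * ti)) * domega for ti in t])
print(np.max(np.abs(recon - np.exp(-alpha * np.abs(t)))))  # ~1e-3, truncation error
```

The residual is set by cutting off the frequency integral, not by the idea itself: each value $f(t)$ really is an integral of cosines $\cos(\omega t)$ weighted by $\frac{\alpha}{\pi(\alpha^2+\omega^2)}$.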
Session 7-F UAV II Distributed Collaborative 3D-Deployment of UAV Base Stations for On-Demand Coverage Tatsuaki Kimura and Masaki Ogura (Osaka University, Japan) Use of unmanned aerial vehicles (UAVs) as flying base stations (BSs) has been gaining significant attention because they can provide connections to ground users efficiently during temporary events (e.g., sports events) owing to their flexible 3D-mobility. However, the complicated air-to-ground channel characteristics and interference among UAVs hinder the dynamic optimization of 3D-deployment of UAVs for spatially and temporally varying users. In this paper, we propose a novel distributed 3D-deployment method for UAV-BSs in a downlink millimeter-wave network for on-demand coverage. Our method consists mainly of two parts: sensing-aided crowd density estimation part; and distributed push-sum algorithm part. Since it is unrealistic to obtain all the specific positions users, the first part estimates the user density based on partial information obtained from on-ground sensors that can detect ground users around them. With the second part, each UAV dynamically updates its 3D-position by collaborating with its neighbors so that the total coverage of users is maximized. By employing a distributed push-sum protocol framework, we also prove the convergence of our algorithm. Simulation results demonstrate that our method can improve the coverage with a limited number of sensors and is applicable to a dynamic network. Looking Before Crossing: An Optimal Algorithm to Minimize UAV Energy by Speed Scheduling with A Practical Flight Energy Model Feng Shan, Luo Junzhou, Runqun Xiong, Wenjia Wu and Jiashuo Li (Southeast University, China) Unmanned aerial vehicles (UAVs) is being widely used in wireless communication, e.g., data collection from ground nodes (GNs), and energy is critical. Existing works combine speed scheduling with trajectory design for UAVs, which is complicated to be optimally solved and lose trace of the fundamental nature of speed scheduling. We focus on speed scheduling by considering straight line flights, having applications in monitoring power transmission lines, roads, pipes or rivers/coasts. By real-world flight tests, we disclose a speed-related flight energy consumption model, distinct from typical distance-related or duration-related models. Based on such practical energy model, we develop the 'look before cross' (LBC) algorithm: on the time-distance diagram, we construct rooms representing GNs, and the goal is to design a room crossing walking trajectory, uniquely mapping to a speed scheduling. Such trajectory is determined by looking before crossing rooms. It is proved to be optimal for the offline scenario, in which information about GNs is available before scheduling. For the online scenario, we proposed a heuristic based on LBC. Simulation shows it performs close to the optimal offline solution. Our study on the speed scheduling and practical flight energy model shed light on a new direction on UAV aided wireless communication. 
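The Kimura–Ogura abstract above relies on a distributed push-sum protocol but, as an abstract, does not spell it out. For background, here is a minimal, generic push-sum averaging sketch — my own toy topology and values, not the paper's deployment algorithm — showing how nodes such as UAVs can agree on a network-wide average using only neighbour-to-neighbour exchanges over a directed, column-stochastic mixing matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
values = rng.uniform(0.0, 1.0, n)   # e.g. locally sensed crowd densities

# Directed ring with self-loops; column i says how node i splits its mass.
A = np.zeros((n, n))
for i in range(n):
    A[i, i] = 0.5                    # keep half
    A[(i + 1) % n, i] = 0.5          # push half to the next node
# A is column-stochastic, which is all push-sum requires for convergence.

x, w = values.copy(), np.ones(n)
for _ in range(200):
    x, w = A @ x, A @ w              # forward the weighted sums and the weights
print(x / w)                         # every entry ~ values.mean()
print(values.mean())
```

Push-sum only asks each node to know its own out-neighbours (so it can split its mass), which is one reason it is popular for directed inter-UAV links.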
SwarmControl: An Automated Distributed Control Framework for Self-Optimizing Drone Networks Lorenzo Bertizzolo and Salvatore D'Oro (Northeastern University, USA); Ludovico Ferranti (Northeastern University, USA & Sapienza University of Rome, Italy); Leonardo Bonati and Emrecan Demirors (Northeastern University, USA); Zhangyu Guan (University at Buffalo, USA); Tommaso Melodia (Northeastern University, USA); Scott M Pudlewski (Georgia Tech Research Institute, USA) Networks of Unmanned Aerial Vehicles will take a vital role in future Internet of Things and 5G networks. However, how to control UAV networks in an automated and scalable fashion in distributed, interference-prone, and potentially adversarial environments is still an open research problem. We introduce SwarmControl, a new software-defined control framework for UAV wireless networks based on distributed optimization principles. In essence, SwarmControl provides the Network Operator (NO) with a unified centralized abstraction of the networking and flight control functionalities. High-level control directives are then automatically decomposed and converted into distributed network control actions that are executed through programmable software-radio protocol stacks. SwarmControl (i) constructs a network control problem representation of the directives of the NO; (ii) decomposes it into a set of distributed sub-problems; and (iii) automatically generates numerical solution algorithms to be executed at individual UAVs. We present a prototype of an SDR-based, fully reconfigurable UAV network platform that implements the proposed control framework, based on which we assess the effectiveness and flexibility of SwarmControl with extensive flight experiments. Results indicate that the SwarmControl framework enables swift reconfiguration of the network control functionalities, and it can achieve an average throughput gain of \(159%\) compared to the state-of-the-art solutions. WBF-PS: WiGig Beam Fingerprinting for UAV Positioning System in GPS-denied Environments Pei-Yuan Hong, Chi-Yu Li, Hong-Rong Chang, YuanHao Hsueh and Kuochen Wang (National Chiao Tung University, Taiwan) Unmanned aerial vehicles (UAVs) are being investigated to substitute for labor in many indoor applications, e.g., asset tracking and surveillance, where the global positioning system (GPS) is not available. Emerging autonomous UAVs are also expected to land in indoor or canopied aprons automatically. Such GPS-denied environments require alternative non-GPS positioning methods. Though there have been some vision-based solutions for UAVs, they perform poorly in the scenes with bad illumination conditions, or estimate only relative locations but not global positions. Other common indoor localization methods do not cover UAV factors, such as low power and flying behaviors. To this end, we propose a practical non-GPS positioning system for UAVs, named WPF-PS, using low-power, off-the-shelf WiGig devices. We formulate a 3-dimensional beam fingerprint by leveraging the diversity of available TX/RX beams and the link quality. To augment accuracy, we use the weighted k-nearest neighbors algorithm to overcome partial fingerprint inaccuracy, and applies the particle filtering technique into considering the UAV motion. We prototype the WBF-PS on our UAV platform, and it yields a 90th percentile positioning error of below 1m with both small and large velocity estimation errors. 
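WBF-PS above is described as matching a 3-dimensional beam fingerprint against a survey database with weighted k-nearest neighbours. The fingerprint construction itself (TX/RX beam pairs plus link quality) is specific to the paper, but the weighted-kNN position estimate is generic; the sketch below shows only that step, with made-up fingerprints, positions and a function name of my own:

```python
import numpy as np

def wknn_position(db_feats, db_pos, query, k=3, eps=1e-6):
    """Weighted k-NN over a fingerprint database.
    db_feats: (N, D) fingerprint vectors; db_pos: (N, 3) surveyed positions."""
    d = np.linalg.norm(db_feats - query, axis=1)   # distance in fingerprint space
    idx = np.argsort(d)[:k]                        # k closest survey points
    w = 1.0 / (d[idx] + eps)                       # closer fingerprints count more
    return (w[:, None] * db_pos[idx]).sum(axis=0) / w.sum()

# Toy database: four surveyed spots with 5-dimensional beam/link-quality features.
feats = np.array([[10, 2, 0, 7, 1], [9, 3, 1, 6, 2],
                  [2, 8, 7, 1, 9], [1, 9, 8, 0, 8]], float)
pos = np.array([[0, 0, 1], [1, 0, 1], [4, 5, 2], [5, 5, 2]], float)
print(wknn_position(feats, pos, np.array([9.5, 2.5, 0.5, 6.5, 1.5]), k=2))  # ~[0.5, 0, 1]
```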
Enrico Natalizio (University of Lorraine/Loria) An Effective Multi-node Charging Scheme for Wireless Rechargeable Sensor Networks Tang Liu (Sichuan Normal University, China); BaiJun Wu (University of Louisiana at Lafayette, USA); Shihao Zhang, Jian Peng and Wenzheng Xu (Sichuan University, China) With the maturation of wireless charging technology, Wireless Rechargeable Sensor Networks (WRSNs) has become a promising solution for prolong network lifetimes. Recently studies propose to employ a mobile charger (MC) to simultaneously charge multiple sensors within the same charging range, such that the charging performance can be improved. In this paper, we aim to jointly optimize the number of dead sensors and the energy usage effectiveness in such multi-node charging scenarios. We achieve this by introducing the partial charging mechanism, meaning that instead of following the conventional way that each sensor gets fully charged in one time step, our work allows MC to fully charge a sensor by multiple times. We show that the partial charging mechanism causes minimizing the number of dead sensors and maximizing the energy usage effectiveness to conflict with each other. We formulate this problem and develop a multi-node temporal spatial partial-charging algorithm (MTSPC) to solve it. The optimality of MTSPC is proved, and extensive simulations are carried out to demonstrate the effectiveness of MTSPC. Energy Harvesting Long-Range Marine Communication Ali Hosseini-Fahraji, Pedram Loghmannia, Kexiong (Curtis) Zeng and Xiaofan Li (Virginia Tech, USA); Sihan Yu (Clemson University, USA); Sihao Sun, Dong Wang, Yaling Yang, Majid Manteghi and Lei Zuo (Virginia Tech, USA) This paper proposes a self-sustaining broadband long-range maritime communication as an alternative to the expensive and slow satellite communications in offshore areas. The proposed system, named Marinet, consists of many buoys. Each of the buoys has two units: an energy harvesting unit and a wireless communication unit. The energy harvesting unit extracts energy from ocean waves to support the operation of the wireless communication unit. The wireless communication unit at each buoy operates in a TV white space frequency band and connects to each other and wired high-speed gateways on land or islands to form a mesh network. The resulting mesh network provides wireless access services to marine users in their range. A prototype of the energy harvesting unit and the wireless communication unit are built and tested in the field. In addition, to ensure Marinet will maintain stable communications in rough sea states, an ocean-link-state prediction algorithm is designed. The algorithm predicts ocean link-states based on ocean wave movements. A realistic ocean simulator is designed and used to evaluate how such a link-state prediction algorithm can improve routing algorithm performance. Maximizing Charging Utility with Obstacles through Fresnel Diffraction Model Chi Lin and Feng Gao (Dalian University of Technology, China); Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China); Jiankang Ren, Lei Wang and Guowei WU (Dalian University of Technology, China) Benefitting from the recent breakthrough of wireless power transfer technology, Wireless Rechargeable Sensor Networks (WRSNs) have become an important research topic. Most prior arts focus on system performance enhancement in an ideal environment that ignores impacts of obstacles. 
This contradicts with practical applications in which obstacles can be found almost anywhere and have dramatic impacts on energy transmission. In this paper, we concentrate on the problem of charging a practical WRSN in the presence of obstacles to maximize the charging utility under specific energy constraints. First, we propose a new theoretical charging model with obstacles based on Fresnel diffraction model, and conduct experiments to verify its effectiveness. Then, we propose a spatial discretization scheme to obtain a finite feasible charging position set for MC, which largely reduces computation overhead. Afterwards, we re-formalize charging utility maximization with energy constraints as a submodular function maximization problem and propose a cost-efficient algorithm with approximation ratio \(\frac{(e-1)}{2e}(1-\varepsilon)\) to solve it. Lastly, we demonstrate that our scheme outperforms other algorithms by at least \(14.8%\) in terms of charging utility through test-bed experiments and extensive simulations. Placing Wireless Chargers with Limited Mobility Haipeng Dai (Nanjing University & State Key Laboratory for Novel Software Technology, China); Chaofeng Wu, Xiaoyu Wang and Wanchun Dou (Nanjing University, China); Yunhuai Liu (Peking University, China) This paper studies the problem of Placing directional wIreless chargers with Limited mObiliTy (PILOT), that is, given a budget of mobile directional wireless chargers and a set of static rechargeable devices on a 2D plane, determine deployment positions, stop positions and orientations, and portions of time for all chargers such that overall charging utility of all devices can be maximized. To the best of our knowledge, we are the first to study placement of mobile chargers. To address PILOT, we propose a (1/2−ε)-approximation algorithm. First, we present a method to approximate nonlinear charging power of chargers, and further propose an approach to construct Maximal Covered Set uniform subareas to reduce the infinite continuous search space for stop positions and orientations to a finite discrete one. Second, we present geometrical techniques to further reduce the infinite solution space for candidate deployment positions to a finite one without performance loss, and transform PILOT to a mixed integer nonlinear programming problem. Finally, we propose a linear programming based greedy algorithm to address it. Simulation and experimental results show that our algorithm outperforms five comparison algorithms by 23.11% ∼ 281.10%. Cong Wang (Old Dominion University) CoLoRa: Enable Muti-Packet Reception in LoRa Shuai Tong, Zhenqiang Xu and Jiliang Wang (Tsinghua University, China) Long Range (LoRa), more generically Low-Power Wide Area Network (LPWAN), is a promising platform to connect Internet of Things. It enables low-cost low-power communication at a few kbps over upto tens of kilometers with 10-year battery lifetime. However, practical LPWAN deployments suffer from collisions given the dense deployment of devices and wide coverage area. We propose CoLoRa, a protocol to decompose large numbers of concurrent transmissions from one collision in LoRa networks. At the heart of CoLoRa, we utilize packet time offset to disentangle collided packets. CoLoRa incorporates several novel techniques to address practical challenges. (1) We translate time offset, which is difficult to measure, to frequency features that can be reliably measured. 
(2) We propose a method to cancel inter-packet interference and extract accurate features from low-SNR LoRa signals. (3) We address the frequency shift incurred by CFO and time offset in LoRa decoding. We implement CoLoRa on USRP N210 and evaluate its performance in both indoor and outdoor networks. CoLoRa is implemented in software at the base station and works with COTS LoRa nodes. The evaluation results show that CoLoRa improves the network throughput by 3.4\(\times\) compared with Choir and by 14\(\times\) compared with LoRaWAN. DyLoRa: Towards Energy Efficient Dynamic LoRa Transmission Control Yinghui Li, Jing Yang and Jiliang Wang (Tsinghua University, China) LoRa has been shown to be a promising platform for providing low-power long-range communication with a low data rate for connecting IoT devices. LoRa can adjust transmission parameters including transmission power and spreading factor, leading to different noise resilience, transmission range and energy consumption. Existing LoRa transmission control approaches can hardly achieve optimal energy efficiency, which leaves a gap to the optimal solution. In this paper, we propose DyLoRa, a dynamic LoRa transmission control system to optimize energy efficiency. The main challenge is the very limited data rate of LoRa, which makes it time- and energy-consuming to obtain link statistics. We show that the demodulation symbol error rate can be stable and thus derive a model for the symbol error rate. We further derive an energy efficiency model based on the symbol error model. DyLoRa can derive parameter settings for optimal energy efficiency even from a single packet. We also adapt the model to different hardware to compensate for hardware deviations. We implement DyLoRa based on LoRaWAN 1.0.2 with an SX1276 LoRa node and an SX1301 LoRa gateway. We evaluate DyLoRa with 11 deployed nodes. The evaluation results show that DyLoRa improves the energy efficiency by up to 103% compared with the state-of-the-art LoRaWAN ADR. LiteNap: Downclocking LoRa Reception Xianjin Xia and Yuanqing Zheng (The Hong Kong Polytechnic University, Hong Kong); Tao Gu (RMIT University, Australia) This paper presents LiteNap, which improves the energy efficiency of LoRa by enabling LoRa nodes to operate in a downclocked 'light sleep' mode for packet reception. A fundamental limit that prevents radio downclocking is the Nyquist sampling theorem, which requires the clock rate to be at least twice the bandwidth of LoRa chirps. Our study reveals that under-sampled LoRa chirps suffer from frequency aliasing, which causes ambiguity in symbol demodulation. LiteNap addresses the problem by leveraging an empirical observation that the hardware of a LoRa radio can cause phase jitters on modulated chirps, which result in frequency leakage in the time domain. The timing information of the phase jitters and frequency leakage can serve as a physical fingerprint to uniquely identify modulated chirps. We propose a scheme to reliably extract the fingerprints from under-sampled chirps and resolve ambiguities in symbol demodulation. We implement LiteNap on a software defined radio platform and conduct trace-driven evaluation. Experiment results show that LiteNap can downclock LoRa nodes to sub-Nyquist rates for energy savings (e.g., 1/8 of the Nyquist rate) without substantially affecting packet reception performance (e.g., >95% packet reception rate).
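The LoRa papers above (CoLoRa, DyLoRa, LiteNap) all build on the same chirp-spread-spectrum demodulation step: a received up-chirp is multiplied by a conjugate base chirp and an FFT maps the symbol to a frequency bin. The following minimal Python sketch is my own simplified baseband model, not code from any of these papers; it assumes an ideal channel and ignores CFO, sampling offset, and the real radio front end:

```python
import numpy as np

def lora_chirp(symbol, sf=7):
    """Baseband LoRa up-chirp for `symbol` in [0, 2**sf), sampled at the bandwidth."""
    n = 2 ** sf
    k = np.arange(n)
    freq = ((symbol + k) % n) / n - 0.5      # instantaneous frequency, cycles per sample
    phase = 2 * np.pi * np.cumsum(freq)
    return np.exp(1j * phase)

def demodulate(rx, sf=7):
    """Dechirp (multiply by the conjugate base chirp), FFT, and pick the peak bin."""
    dechirped = rx * np.conj(lora_chirp(0, sf))
    return int(np.argmax(np.abs(np.fft.fft(dechirped))))

tx = lora_chirp(42, sf=7)
rx = tx + 0.4 * (np.random.randn(tx.size) + 1j * np.random.randn(tx.size))
print(demodulate(rx, sf=7))    # recovers 42 at moderate noise levels
```

The dechirped product is a pure tone whose FFT bin equals the transmitted symbol, which is why collisions, under-sampling, and frequency offsets in the papers above are all analyzed in this frequency-bin domain.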
Online Concurrent Transmissions at LoRa Gateway Zhe Wang, Linghe Kong and Kangjie Xu (Shanghai Jiao Tong University, China); Liang He (University of Colorado Denver, USA); Kaishun Wu (Shenzhen University, China); Guihai Chen (Shanghai Jiao Tong University, China) Long Range (LoRa) communication, thanks to its wide network coverage and low energy operation, has attracted extensive attention from both academia and industry. However, the existing LoRa-based Wide Area Network (LoRaWAN) suffers from severe inter-network interference, for two reasons. First, the densely-deployed LoRa ends usually share the same network configurations, such as spreading factor (SF), bandwidth (BW) and carrier frequency (CF), causing interference when operating in the vicinity of one another. Second, LoRa is tailored for low-power devices, which excludes LoRaWAN from using listen-before-talk (LBT) mechanisms: LoRaWAN has to use a duty-cycled medium access policy and is thus incapable of channel sensing or collision avoidance. To mitigate the inter-network interference, we propose a novel solution, called OCT, that achieves online concurrent transmissions at the LoRa gateway and can be easily deployed there. We have implemented and evaluated OCT on a USRP platform and commodity LoRa ends, showing that OCT achieves: (i) >90% packet reception rate (PRR), (ii) a bit error rate (BER) of \(3 \times 10^{-3}\), (iii) 2× and 3× throughput in the scenarios of two- and three-packet collisions, respectively, and (iv) a 67% latency reduction compared with the state of the art. Swarun Kumar (Carnegie Mellon University) Session 10-F WiFi and Wireless Sensing Joint Access Point Placement and Power-Channel-Resource-Unit Assignment for 802.11ax-Based Dense WiFi with QoS Requirements Shuwei Qiu, Xiaowen Chu, Yiu-Wing Leung and Joseph Kee-Yin Ng (Hong Kong Baptist University, Hong Kong) IEEE 802.11ax is a promising standard for the next-generation WiFi network, which uses orthogonal frequency division multiple access (OFDMA) to segregate the wireless spectrum into time-frequency resource units (RUs). In this paper, we aim at designing an 802.11ax-based dense WiFi network to provide WiFi services to a large number of users within a given area with the following objectives: (1) to minimize the number of access points (APs); (2) to fulfil the users' throughput requirements; and (3) to be resistant to AP failures. We formulate the above into a joint AP placement and power-channel-RU assignment optimization problem, which is NP-hard. To tackle this problem, we first derive an analytical model to estimate each user's throughput under the mechanism of OFDMA and a widely used interference model. We then design a heuristic algorithm to find high-quality solutions with polynomial time complexity. Simulation results show that our algorithm can achieve the optimal performance for a small area of 50 × 50 m². For a larger area of 100 × 80 m², where we cannot find the optimal solution through an exhaustive search, our algorithm can reduce the number of APs by 32-55% compared to the random and greedy solutions. Machine Learning-based Spoofing Attack Detection in MmWave 60GHz IEEE 802.11ad Networks Ning Wang and Long Jiao (George Mason University, USA); Pu Wang (Xidian University, China); Weiwei Li (Hebei University of Engineering, China & George Mason University, USA); Kai Zeng (George Mason University, USA) Spoofing attacks pose a serious threat to wireless communications. Exploiting physical-layer features to counter spoofing attacks is a promising technology.
Although various physical-layer spoofing attack detection (PL-SAD) techniques have been proposed for conventional 802.11 networks in the sub-6GHz band, the study of PL-SAD for 802.11ad networks in 5G millimeter wave (mmWave) 60GHz band is largely open. In this paper, we propose a unique physical layer feature in IEEE 802.11ad networks, i.e., the signal-to-noise-ratio (SNR) traces in the sector level sweep (SLS) of beam pattern selections, to achieve PL-SAD. The proposed schemes are based on the observation that each 802.11ad device presents distinctive beam patterns in the beam sweeping process, which results in distinguishable SNR traces. Based on these observations, we present a novel neural network framework, named BNFN-framework, that can tackle small samples learning and allow for quick construction. The BNFN-framework consists of a backpropagation neural network and a fast forward propagation neural network. Generative adversarial networks (GANs) are introduced to optimize these neural networks. We conduct experiments using off-the-shelf 802.11ad devices, Talon AD7200s and MG360, to evaluate the performance of the proposed PL-SAD scheme. Experimental results confirm the effectiveness of the proposed PL-SAD scheme under different scenarios. MU-ID: Multi-user Identification Through Gaits Using Millimeter Wave Radios Xin Yang (Rutgers University, USA); Jian Liu (The University of Tennessee, Knoxville, USA); Yingying Chen (Rutgers University, USA); Xiaonan Guo and Yucheng Xie (Indiana University-Purdue University Indianapolis, USA) Multi-user identification could facilitate various large-scale identity-based services such as access control, automatic surveillance system, and personalized services, etc. Although existing solutions can identify multiple users using cameras, such vision-based approaches usually raise serious privacy concerns and require the presence of line-of-sight. Differently, in this paper, we propose MU-ID, a gait-based multi-user identification system leveraging a single commercial off-the-shelf (COTS) millimeter-wave (mmWave) radar. Particularly, MU-ID takes as input frequency-modulated continuous-wave (FMCW) signals from the radar sensor. Through analyzing the mmWave signals in the range-Doppler domain, MU-ID examines the users' lower limb movements and captures their distinct gait patterns varying in terms of step length, duration, instantaneous lower limb velocity, and inter-lower limb distance, etc. Additionally, an effective spatial-temporal silhouette analysis is proposed to segment each user's walking steps. Then, the system identifies steps using a Convolutional Neural Network (CNN) classifier and further identifies the users in the area of interest. We implement MU-ID with the TI AWR1642BOOST mmWave sensor and conduct extensive experiments involving 10 people. The results show that MU-ID achieves up to 97% single-person identification accuracy, and over 92% identification accuracy for up to four people, while maintaining a low false positive rate. SmartBond: A Deep Probabilistic Machinery for Smart Channel Bonding in IEEE 802.11ac Raja Karmakar and Samiran Chattopadhyay (Jadavpur University, India); Sandip Chakraborty (Indian Institute of Technology Kharagpur, India) Dynamic bandwidth operation in IEEE 802.11ac helps wireless access points to tune channel widths based on carrier sensing and bandwidth requirements of associated wireless stations. 
However, wide channels result in a reduction in the carrier sensing range, which leads to the problem of channel sensing asymmetry. As a consequence, access points face hidden channel interference that may lead to as high as 60% reduction in the throughput under certain scenarios of dense deployments of access points. Existing approaches handle this problem by detecting the hidden channels once they occur and affect the channel access performance. In a different direction, in this paper, we develop a method for avoiding hidden channels by meticulously predicting the channel width that can reduce interference as well as can improve the average communication capacity. The core of our approach is a deep probabilistic machinery based on point process modeling over the evolution of channel width selection process. The proposed approach, SmartBond, has been implemented and deployed over a testbed with 8 commercial wireless access points. The experiments show that the proposed model can significantly improve the channel access performance although it is lightweight and does not incur much overhead during the decision making process. Yuanqing Zheng (The Hong Kong Polytechnic University)
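As a rough illustration of the AP-placement subproblem in the 802.11ax dense-WiFi paper of this session (not the authors' heuristic, which also jointly assigns power, channels, and RUs), if each AP is assumed to serve users within a fixed radius, the placement step reduces to a covering problem that a greedy choice handles reasonably well. A minimal sketch with made-up coordinates:

```python
import itertools
import math

def greedy_ap_placement(users, candidates, radius):
    """Repeatedly place the candidate AP that covers the most still-uncovered users."""
    uncovered = set(range(len(users)))
    placed = []
    while uncovered:
        best_pos, best_cov = None, set()
        for pos in candidates:
            cov = {i for i in uncovered if math.dist(users[i], pos) <= radius}
            if len(cov) > len(best_cov):
                best_pos, best_cov = pos, cov
        if not best_cov:            # remaining users are out of range of every candidate
            break
        placed.append(best_pos)
        uncovered -= best_cov
    return placed

users = [(x, y) for x in range(0, 50, 7) for y in range(0, 50, 9)]   # 48 users on a 50 x 50 m area
candidates = list(itertools.product(range(0, 51, 10), repeat=2))     # candidate AP positions on a grid
print(len(greedy_ap_placement(users, candidates, radius=15.0)), "APs placed")
```

This captures only the coverage aspect; throughput, interference, and failure resistance constraints from the abstract would be added on top of such a skeleton.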
CommonCrawl
Journal of Ocean Engineering and Technology (한국해양공학회지) Korean Society of Ocean Engineers (한국해양공학회) Construction/Transportation > Maritime Safety/Transportation Technology Construction/Transportation > Water Engineering System Journal of Ocean Engineering and Technology provides a medium for the publication of original research and development technology in marine/material science and engineering, which includes: Ocean Engineering; Coastal Engineering; Naval Architecture; Offshore Technology; Frontier Energy Resources Technology; Marine Renewable Energy; Underwater Vehicles and Robotics; Underwater Acoustics and Sonar Technology; Underwater Observation Systems; Underwater Communication and Technology; Marine Equipments; Remote Sensing of Air/Ocean Surface; Ocean Mining; Offshore Mechanics; Marine Hydrodynamics and CFD; Sloshing Dynamics and Design; Vortex-induced Vibration; Cable, Mooring, Buoy Technology; Fluid-Structure Interaction; Hydroelasticity; Risk and Limit State Design and Assessment; Ship Maneuvering; Seakeeping and Control Systems; Ship Resistance and Propulsion; Port and Waterfront Design and Engineering; Geotechnology; Satellite Observation; Shore Protection; Beach Nourishment; Sediment Transport; Mechanics; Safety and Reliability; Subsea; Pipelines; Risers and Positioning; High-Performance Materials; Nano-technologies for Clean Energy; Friction Stir Welding for Oil and Gas; Arctic Materials; Arctic Science and Technology; Advanced Ship Technology; Naval and Offshore Structures and Technology; and Oceanography. http://www.joet.org KSCI KCI Improvement of Tidal Circulation in a Closed Bay using Variation of Bottom Roughness BOO SUNG YOUN 1 Tidal circulation in a closed bay using a variation of bottom roughness was investigated through numerical experiments based on a finite difference multi-level model. Various distributions of bottom roughness in the bay were implemented to determine their effects. It was determined that residual currents can be generated from the differences in bottom roughness between the streaming and reverse flow directions. The magnitude of the residual currents and the volume flow rate increase when the relative ratio of bottom roughness between the streaming and reverse flow directions increases. Circulation in the closed bay is also improved by employing changes in bottom roughness. Flow Characteristics Study around Two Vertical Cylinders SHIN YOUNG S.;JO CHUL-HEE;KIM IN-HO 8 In a multiple array of vertical cylinders, flow patterns are very complex and highly interactive between cylinders. The patterns are turbulent and non-linear, depending on various factors. The gap between the cylinders and the incoming upstream flow velocity can affect the downstream cylinder. In this study, the flow characteristics around two vertical cylinders are investigated numerically and experimentally. As the gap between the cylinders is changed at a fixed incoming velocity, the pressure distributions around the cylinders are observed and compared by experimental and numerical approaches. The FDM and a multi-block method are applied in the study. The pressures at 12 points around the cylinder are measured in the experiment. The results can be applied in the understanding and design of multiple pile array structures.
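For the two-cylinder study above, the readings at the 12 pressure taps are usually reduced to a dimensionless pressure coefficient before comparing experiment with computation. A small sketch of that reduction, compared against the classical potential-flow result for a single isolated cylinder (Cp = 1 − 4 sin²θ); the numbers are illustrative, not data from the paper:

```python
import numpy as np

rho, U, p_inf = 1000.0, 0.5, 101325.0        # water density, free-stream speed, reference pressure
q = 0.5 * rho * U ** 2                       # dynamic pressure, Pa
theta = np.deg2rad(np.arange(0, 360, 30))    # 12 pressure taps, every 30 degrees

# hypothetical tap readings (Pa); measured data from the experiment would go here
p_tap = p_inf + q * (1 - 4 * np.sin(theta) ** 2) + np.random.normal(0, 5, theta.size)

cp_measured = (p_tap - p_inf) / q
cp_ideal = 1 - 4 * np.sin(theta) ** 2        # potential flow, single cylinder

for ang, cm, ci in zip(np.degrees(theta), cp_measured, cp_ideal):
    print(f"{ang:5.0f} deg  Cp_meas {cm:+6.2f}  Cp_ideal {ci:+6.2f}")
```

Deviations of the measured Cp from the single-cylinder ideal curve are exactly what reveal the interaction effects between the two cylinders that the study examines.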
Analysis of Pollutant Loads and Physical Oceanographic Status at the Developing Region of Deep Sea Water in the East Sea LEE IN-CHEOL;YOON BAN-SAM 14 As a basic study for establishing the input conditions of a forecasting/estimating model used for deep-sea water drainage to the ocean, this study was carried out as follows: 1) estimating the amount of river discharge and pollutant loads into the developing region of deep sea water in the East Sea, Korea; 2) field observation of the tidal current and the vertical water temperature and salinity distributions; 3) a 3-D numerical experiment of the tidal current to analyze the physical oceanographic status. The amount of river discharge flowing into the study area was estimated at about $462.7 \times 10^{3}\ \mathrm{m}^{3}/\mathrm{day}$ as a daily mean in 2002. The annual mean pollutant loads of COD, TN, and TP were estimated at 7.02 ton-COD/day, 4.06 ton-TN/day, and 0.39 ton/day, respectively. Field observations of the tidal current generally show current velocities of 20-40 cm/s at the surface layer, decreasing to below 20 cm/s as the water depth increases. We also found a stratification condition at around 30 m water depth in the observation area. The differences in water temperature and salinity between the surface layer and the bottom layer were about 18°C and 0.8 psu, respectively. On the other hand, we found a definite trend of a 34 psu salinity water mass in the deep sea region. A Study of Structure-Fluid Interaction Technique for Submarine LOX Tank under Impact Load of Underwater Explosion KIM JAE-HYUN;PARK MYUNG-KYU 20 The authors performed an underwater explosion analysis for the liquefied oxygen (LOX) tank, a type of fuel tank in a mid-size submarine, and tried to verify the structural safety of this structure. First, the authors reviewed the theory and application of underwater explosion analysis using a structure-fluid interaction technique and its finite element modeling scheme. Next, the authors modeled the explosive and sea water as fluid elements, the LOX tank as structural elements, and the interface between the two regions with the ALE scheme. The effects of fluid mesh size and shape on shock pressure and impulse are also investigated. Upon analysis, it was found that the shock pressure due to the explosion propagated into the water region and hit the structure region. The plastic deformation and the equivalent stress were apparent at the web frame and the shock mount of the LOX structure, but these values were acceptable with respect to the design criteria. Probabilistic Seismic Hazard Analysis of Caisson-Type Breakwaters KIM SANG-HOON;KIM DOO-KIE 26 Recent earthquakes of magnitude over 5.0 on the eastern coast of Korea have aroused interest in earthquake analyses and the seismic design of caisson-type breakwaters. Most earthquake analysis methods, such as equivalent static analysis, response spectrum analysis, nonlinear analysis, and capacity analysis, are deterministic and have been used for seismic design and performance evaluation of coastal structures. However, deterministic methods have difficulty reflecting one of the most important characteristics of earthquakes, i.e., their uncertainty. This paper presents results of a probabilistic seismic hazard assessment (PSHA) of an actual caisson-type breakwater, considering uncertainties of earthquake occurrences and soil properties.
First, the seismic vulnerability of the structure and the seismic hazard of the site are evaluated using earthquake sets and a seismic hazard map; then, the seismic risk of the structure is assessed. Thermal Deformation of Curved Plates by Line Heating LEE JOO-SUNG;LIM DONG-YONG 33 It has been well documented that plate forming is one of the most important processes in shipbuilding. In most shipyards, the line heating method is primarily used for plate forming. Since the heating process is carried out on curved plates and not on flat plates, the curvature effect on the final deformation must be considered in deriving simplified prediction models for deformation. This paper investigates the effect of curvature along the heating line on the deformation of the plate. First of all, results of the numerical analysis are compared with those of a line-heating test to validate the elasto-plastic analysis procedure used in the present study, and good agreement is found. Then, the present numerical procedure is applied to flat and curved plate models to investigate the curvature effect on the heat transfer characteristics and the deformation produced by line heating. Damage of Composite Laminates by Low-Velocity Impact AHN SEOK-HWAN;KIM JIN-WOOK;DO JAE-YOON;KIM HYUN-SOO;NAM KI-WOO 39 The study investigated the nondestructive characteristics of damage caused by low-velocity impact on symmetric cross-ply laminates composed of [0°/90°] lay-ups with 16s, 24s, 32s, and 48s stacking sequences. The thicknesses of the laminates were 2, 3, 4 and 6 mm, respectively. The impact machine used, a Model 8250 Dynatup Instron, was a drop-weight type that employed gravity. The impact velocities used in this experiment were 0.75, 0.90, 1.05, 1.20 and 1.35 m/s. Both the load and the deformation increased when the impact velocity was increased. Further, while the load increased with laminate thickness at the same impact velocity, the deformation decreased. The extensional velocity increased with laminate thickness at the same impact velocity, and with impact velocity at the same laminate thickness. In the ultrasonic scans, the damaged area appeared as a dimmed zone. This is due to the fact that the wave, after partial reflection by the defects, does not have enough energy to reach the opposite side or to come back from it. The damaged laminate areas differed according to the laminate thickness and the impact velocity. The extensional velocities are lower in the 0° direction and higher in the 90° direction as the size of the defect increases. However, it was difficult to draw any conclusion for the extensional velocities in the 45° direction. Dissimilar Friction Welding for Marine Shock Absorber Steels and its Evaluation by Acoustic Emission LEE BAE-SUB;KONG YU-SIK;KIM SEON-JIN 44 The shock absorbers of marine vehicles are very important components for absorbing the shock resulting from driving. Depending on the kind of vehicle, these essential components, the piston and piston rod, must be made of S25C, S45C, or SCM440 steel, precisely machined, and assembled with bolts. Such materials have been difficult to weld by conventional arc welding and the resulting quality can be unstable; they have also been associated with many technical problems in manufacturing. However, such problems can be avoided by using the friction welding technique.
These factors have necessitated the domestic development of marine shock absorbers using friction welding, and have stimulated a new approach to the study of real-time weld quality evaluation by AE techniques. A Study on the Characteristic Change of 2.25Cr-1Mo Steel Welds for Various Welding Processes BANG HAN-SUR;OH CHONG-IN;BANG HEE-SUN;KIM HYUNG 49 Despite the merits of laser welding compared to arc welding, namely high welding quality, a narrower melting and heat-affected zone, smaller welding deformation, and fine weldment grains, laser welding is mainly used for joining thin steel parts in the electronics industry. With its high power, laser welding is increasingly used for joining thick plates and special kinds of steel. While arc welding is still applied to 2.25Cr-1Mo steel, an essential material of atomic power generation equipment, laser welding has not yet been applied despite its high quality, and it is therefore worth trialling for special cases demanding high welding quality, such as atomic power plants. Accordingly, in this research, the mechanical properties of weldments produced by arc and laser welding were investigated using FEM to confirm the applicability of laser welding to 2.25Cr-1Mo steel. The Charpy test was carried out to understand the effect on the fracture toughness of the weldments. The results of the examinations and mechanical property tests showed the validity of this research. Mechanical Characteristics of Hybrid Fiber Reinforced Composite Rebar HAW GIL-YOUNG;AHN DONG-GUE;LEE DONG-GI 57 The objective of this research is to investigate the mechanical characteristics of a hybrid fiber reinforced composite rebar manufactured by a braidtrusion process. Braidtrusion is a direct composite fabrication technique utilizing in-line braiding and the pultrusion process. In order to obtain the mechanical behavior of the glass fiber, carbon fiber, and Kevlar fiber, tensile tests were carried out. The results for the fibers are compared with those of steel. Hybrid rebar specimens with various diameters, ranging from model size (3 mm) to full-scale size (9.5 mm), and various cross sections, such as solid and hollow shapes, have been manufactured by the braidtrusion process. Tensile and bending tests of the hybrid rebar, the conventional GFRP rebar, and the steel bar have been carried out. The results of the experiments show that the hybrid rebar is superior to the conventional GFRP rebar and the steel bar from the viewpoint of tensile and bending characteristics. Numerical Modeling and Experimental Verification for Target Strength of Submerged Objects CHOI YOUNG-HO;SHIN KEE-CHUL;YOU JIN-SU;KIM JEA-SOO;JOO WON-HO;KIM YOUNG-HYUN;PARK JONG-HYUN;CHOI SANG-MUN;KIM WOO-SHIK 64 Target strength (TS) is an important factor for the detection of a target by an active sonar system; thus, numerical models for the prediction of TS are being widely developed. For the frequency range of several kHz, the most important scattering mechanism is known to be specular reflection, which is largely affected by the geometrical shape of the target. In this paper, a numerical algorithm to predict TS is developed based on the Kirchhoff approximation, which is computationally efficient. The developed algorithm is applied to canonical targets of simple shape, for which analytical solutions exist. The numerical results show good agreement with the analytical solutions.
Also, the algorithm is applied to more complex scatterers, and the results are compared with experimental data obtained in a water tank experiment for the purpose of verifying the developed numerical model. Discussions on the effect of spatial sampling and other aspects of the numerical modeling are presented. A Basic Structural Design for Large Floating Crane PARK CHAN-HU;KIM BYUNG-WOO;HA MUN-KEUN;CHUN MIN-SUNG 71 This paper describes the basic structural design of a large floating crane barge of fixed undulation type. Structural analysis was performed separately after dividing the floating crane into two parts: the crane part, composed of the jib boom, back stay and back tower, and the barge part, which supports the crane part. The structural strength of the jib boom members is in compliance with JIS B 8821, and the scantlings of all barge structural members are in compliance with the requirements of the KR (Korean Register of Shipping) Steel Barges and Rules for Classification of Steel Ships. For the structural analysis of the large floating crane, the MSC/NASTRAN and MSC/PATRAN software packages were used. Effects of Fleet-Angle on Sway Motions of a Cargo: Design Force Calculation SHIN JANG-RYONG;PARK YONG-HYUN;GOH SUNG-HEE;HONG KEUM-SHIK 77 Over the last 10 years, significant changes have taken place in the world of container shipping. The size and the speed of quay-side cranes have increased considerably. As a result, the stiffness of a crane has decreased and the sway oscillation of the cargo may become violent. The purpose of this paper is to determine the design force caused by the sway oscillation of a cargo lifted by four ropes with an initial fleet angle, and the governing equations of the lifting system for an anti-sway control system design. Anti-Sway Control of a Jib Crane Using Time Optimal Control KANG MIN-WOO;HONG KEUM-SHIK 87 This paper investigates the constant-level luffing and time optimal control of jib cranes. Constant-level luffing, which is the sustainment of the load at a constant height during luffing, is achieved by analyzing the kinematic relationship between the angular displacement of the boom and that of the main hoist motor of a jib crane. Under the assumption that the main body of the crane does not rotate, the equations of motion of the boom are derived using Newton's Second Law. The dynamic equations for the crane system are highly nonlinear; therefore, they are linearized under the small angular motion of the load to apply linear control theory. This paper investigates the time optimal control from the perspective of no sway at a target point. A stepped velocity pattern is used to design the moving path of the jib crane. Simulation results demonstrate the effectiveness of the time optimal control in terms of anti-sway motion of the load while luffing the crane. Design Optimization of Pressure Vessel of Small Autonomous Underwater Vehicle CHUNG TAE-HWAN;HO IN-SIKN;LEE PAN-MOOK;LEE CHONGMOO;LIM YONGGON 95 This paper presents the optimum design of a cylindrical shell under external pressure loading. Two kinds of material, Al7075-T6 and Ti-6Al-4V, are considered. For each material, the design variable is the thickness of the unstiffened parallel middle-body shell, the state variable (constraint) is the hoop stress, and the objective function is the total weight of the cylindrical shell. Optimization is performed with the conventional FE program ANSYS. In addition, a buckling analysis is performed for the middle body of the cylindrical shell.
Finally, we calculate the payload of the cylindrical shell required to maintain neutral buoyancy with the optimized thickness in deep-sea applications.
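The sizing logic in this last abstract can be illustrated with a thin-wall hand calculation: the external pressure sets the minimum wall thickness through the hoop-stress constraint, and the thickness in turn fixes how much payload mass keeps the hull neutrally buoyant. The numbers below are my own illustrative values (depth, allowable stress, geometry), not the paper's design data, and end caps and buckling are ignored:

```python
import math

# Illustrative inputs (not the paper's values)
depth = 2000.0                    # operating depth, m
rho_sw, g = 1025.0, 9.81          # seawater density (kg/m^3), gravity (m/s^2)
r, L = 0.10, 1.0                  # shell mean radius and length, m
sigma_allow = 500e6               # allowable stress, Pa (roughly Ti-6Al-4V with margin)
rho_metal = 4430.0                # Ti-6Al-4V density, kg/m^3

p_ext = rho_sw * g * depth                         # external pressure, Pa
t = p_ext * r / sigma_allow                        # thin-wall hoop stress: sigma = p*r/t
shell_mass = rho_metal * 2 * math.pi * r * t * L   # cylindrical wall only
buoyancy_mass = rho_sw * math.pi * r ** 2 * L      # mass of displaced seawater
payload = buoyancy_mass - shell_mass               # mass budget left for internals

print(f"wall thickness : {t * 1e3:.1f} mm")
print(f"shell mass     : {shell_mass:.1f} kg")
print(f"payload budget : {payload:.1f} kg for neutral buoyancy")
```

With these assumed values the wall comes out around 4 mm thick and roughly 20 kg of payload budget remains, which is the kind of trade-off the FE-based optimization in the paper formalizes.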
CommonCrawl
Shifts of finite type and random substitutions
Philipp Gohlke 1, Dan Rust 1 and Timo Spindeler 2
1 Fakultät für Mathematik, Universität Bielefeld, Postfach 100131, 33501 Bielefeld, Germany
2 Department of Mathematical and Statistical Sciences, 632 CAB, University of Alberta, Edmonton, AB, T6G 2G1, Canada
Discrete & Continuous Dynamical Systems, September 2019, 39(9): 5085-5103. doi: 10.3934/dcds.2019206
Received May 2018  Revised January 2019  Published May 2019
We prove that every topologically transitive shift of finite type in one dimension is topologically conjugate to a subshift arising from a primitive random substitution on a finite alphabet. As a result, we show that the set of values of topological entropy which can be attained by random substitution subshifts contains the logarithm of all Perron numbers and so is dense in the positive real numbers. We also provide an independent proof of this density statement using elementary methods.
Keywords: Random substitutions, shifts of finite type, topological entropy.
Mathematics Subject Classification: Primary: 37B10, 37A50; Secondary: 37B40, 52C23.
Citation: Philipp Gohlke, Dan Rust, Timo Spindeler. Shifts of finite type and random substitutions. Discrete & Continuous Dynamical Systems, 2019, 39 (9) : 5085-5103. doi: 10.3934/dcds.2019206
Figure 1. Graph $ G_{A} $ of the SFT $ X_{A} $ in Example 5.8
Figure 2. Graph $ G $ with labelled edges for Example 5.11
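The entropy statement in the abstract rests on a standard fact: for an irreducible shift of finite type, the topological entropy is the logarithm of the Perron (largest) eigenvalue of its 0-1 transition matrix. A quick numerical check in Python for the golden-mean shift (no two consecutive 1s), whose entropy is log((1+√5)/2):

```python
import numpy as np

def sft_entropy(transition_matrix):
    """Log of the Perron eigenvalue of an irreducible 0-1 transition matrix."""
    eigenvalues = np.linalg.eigvals(np.asarray(transition_matrix, dtype=float))
    return float(np.log(max(abs(eigenvalues))))

golden_mean = [[1, 1],
               [1, 0]]                        # forbid the word "11"
print(sft_entropy(golden_mean))               # 0.4812...
print(np.log((1 + np.sqrt(5)) / 2))           # log of the golden ratio, same value
```

The golden ratio is a Perron number, which is why values such as this entropy appear in the set that the paper shows is attainable by random substitution subshifts.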
CommonCrawl
Why are permutations ($_nP_r$) called differently in non-English languages ("variations" in German)? First of all, you should be at least a little familiar with combinatorics to understand this question. Some often-used calculator keys in stochastics are the nCr and nPr ones. Edit: I first asked this question on the German and English Stack Exchange versions (both places where "mathematics" tags exist) and the usual Math Stackoverflow, but as it turned out this is likely also the case for all non-English languages and this community may be better suited to answer the question, I've also posted it here. Despite that, below the German language is referred to as an example, where it is called differently. Edit2: Also posted at History of Science and Mathematics and linguistics. Edit3: In French this seems to be called "arrangement". And in Russian "razmeshhenie", "which also means arrangement". $_nC_r$ = combinations nCr is quite obvious. The "C" stands for "combinations" (actually those without repetition) and this is how they are called in German and English. That is just the binomial coefficient: $$\binom{n}{k}=\frac{n!}{k!(n-k)!}={}_n\!\!C_k$$ $_nP_r$ = permutations (English)/variations (German) Keeping that knowledge in mind, as a German, you would assume nPr is for calculating the permutations (without repetition, again), i.e. just: $$n!$$ However, that's not the case; it actually calculates the "variation", as it's called in German: $$\frac{n!}{(n-k)!}={}_n\!\!P_k$$ And it is true: the "P" does stand for "permutation" in English. So the last formula is what they call a "permutation". Just different names? So we could say these are just different names, but no, it gets more complicated, because – using the German terms here again – permutations are just a special kind of variation. Essentially, it's the last formula with k=n, i.e. you choose all items and do not select a subset when arranging them. Obviously, English mathematicians do not use the term "permutations" for the specific version we name in German, but for the general version. Essentially this leads to another problem, however, when we look at nPr with repetition. All examples before were without repetition, but there are formulas for the ones with repetition, too. The "permutation with repetition"/"Variation mit Wiederholung" is easy to calculate; you just take: $$n^k$$ Wikipedia does not seem to want to acknowledge the English term for that, saying they have "sometimes been referred to" in this way… (Or is this actually something different, as the formula there is k^n?) Anyway, if we assume the term is used like that, we've got another way to have German "Permutationen" "with repetition". This time, however, as in the German definition of permutations, we do not select items; we just have multiple copies of the same items. So if, e.g., you have r, s, …, t identical elements among n elements, you get a formula like this: $$\frac{n!}{r!\cdot s!\cdots t!}$$ And this is what we call "Permutation mit Wiederholung" in German. But what term is then used in English for this kind of "repetition"? So how did this inconsistent naming across languages happen? Is there any "correct" term or has one term been invented before another one, so someone adapted it wrong? Do other languages possibly also name it differently, i.e.
is the German naming the exception or the English one? And what term is used for "Permutationen mit Wiederholung"/same elements in a set in English then? If you need some more understanding: Here are the formulas for the calculator keys Here is another overview about the subject as Germans see it. Edit: I found something: The English Wikipedia describes the term "variations" as: Variations without repetition, an archaic term in combinatorics still commonly used by non-English authors for k-permutations of n Variations with repetition, an archaic term in combinatorics still commonly used by non-English authors for n-tuples Despite that sounding a little pejorative to me as a German speaker, it raises the question of whether this is really (internationally?) deprecated/outdated? Or what term is supposed to be used? Also the relation to tuples, which are – I thought – just a different concept of a list of numbers, is not clear to me. After all, I could not found any of the formulas I've just mentioned in the linked article. mathematics terminology language combinatorics Conifold rugkrugk $\begingroup$ Inconsistent meanings for literal translations is hardly limited to mathematics. British vs. USA biscuits and the French "monter" vs. "monter dans" are two well-known examples. Idioms make their way into math just as easily as into general language. $\endgroup$ $\begingroup$ I think they are often called differently even in English. The terms "permutation" and "combination"; and the notations ${}_nC_k$ and ${}_nP_k$ are found only in certain textbooks (especially at high-school level or below), but are unknown otherwise. $\endgroup$ – Gerald Edgar $\begingroup$ @GeraldEdgar Maybe it's different in text books, but it is certainly written like that on all calculators I know. Internationally, AFAIK. $\endgroup$ – rugk A place to start for questions of this sort is Jeff Miller's website Earliest Known Uses of Some of the Words of Mathematics. On permutations and combinations one of his sources is Smith, History Of Mathematics Vol II, p.28 (freely available), where we read: "The first work of any extent that is devoted to the subject was Jacques Bernoulli's Ars Conjectandi. This work contains the essential part of the theory of combinations as known today. In it appears in print for the first time, with the present meaning, the word "permutation". For this concept Leibniz had used variationes and Wallis had adopted alternationes. The word "combination" was used in the present sense by both Pascal and Wallis. Leibniz used complexiones for the general term, reserving combinationes for groups of two elements and conternationes for groups of three, words which he generalized by writing con2natio, con3natio, and so on." Miller marks the attribution to Ars Conjectandi (1713) as incorrect and gives two earlier sources for "permutation", based on OED: "In 1678 Thomas Strode, A Short Treatise of the Combinations, Elections, Permutations & Composition of Quantities, has: "By Variations, permutation or changes of the Places of Quantities, I mean, how many several ways any given Number of Quantities may be changed." Lexicon Technicum, or an universal English dictionary of arts and sciences (1710) has: "Variation, or Permutation of Quantities, is the changing any number of given Quantities, with respect to their Places."" So the "inconsistency" was there from the start, as it usually is with ideas from multiple origins, rather than coming from a breakthrough by a single person. 
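To pin the terminology from the question to concrete numbers, here is an illustrative aside (not part of the original exchange) showing how the four quantities compute in Python, whose standard library happens to follow the English naming with math.comb and math.perm; the multiset example at the end is my own:

```python
from math import comb, factorial, perm, prod

n, k = 5, 3
print(comb(n, k))        # nCr, combinations without repetition: 10
print(perm(n, k))        # nPr, k-permutations ("Variation" in German): 60
print(factorial(n))      # permutations of all n elements ("Permutation"): 120
print(n ** k)            # k-tuples, "Variation mit Wiederholung": 125

# "Permutation mit Wiederholung": arrangements of a multiset, e.g. MISSISSIPPI
counts = [1, 4, 4, 2]    # one M, four I, four S, two P (11 letters)
print(factorial(sum(counts)) // prod(factorial(c) for c in counts))   # 34650
```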
In such cases it is the eventual consistency, if it happens, that requires explanation, not the inconsistency. Mathematicians often stick to the use encountered in the native language textbooks, and those tend to emphasize what was introduced by their "own". The calculus notation story on the continent vs Britain is well known, see Was English mathematics behind Europe by many years because of Newton's notation? When people encounter several different words for the same term they tend to introduce some distinctions, and adapt each word to a subclass of the original meaning. And so on. These issues are not settled by good reasons. There are no "correct" terms, and therefore, no wrong adaptations. This is how language is supposed to evolve. By the way, permutation (perestanovka) is also used in Russian, although more typically in algebraic rather than combinatorial contexts. ConifoldConifold $\begingroup$ "The calculus notation story on the continent vs Britain is well known." I am sorry, but I am not sure what you are referring, too. I guess it would help if you added a link or so. 😃 $\endgroup$ $\begingroup$ @rugk Link added. $\endgroup$ – Conifold "...what term is used for "Permutationen mit Wiederholung"/same elements in a set in English?" The combinatorics expressions you're looking for in English are probably 'permutations/combinations with/without replacement' &c, rather than 'with/without repetition'. See this answer on the math stack exchange: https://math.stackexchange.com/questions/474741/formula-for-combinations-with-replacement . Also you might see Sampling With Replacement and Sampling Without Replacement for other usage examples. Of course, as in many other cases of technical translation, it's not guaranteed that the (nearly) equivalent phrases have exactly the same scope. I hope that helps. terry-sterry-s $\begingroup$ Replacement is generally used in contexts where you're talking about probability—e.g., sampling. But repetition is often used in contexts where you're not. For example, here. $\endgroup$ – Peter Shor $\begingroup$ @peter-shor Thanks, that looks like a helpful tech-linguistic point. $\endgroup$ – terry-s $\begingroup$ @PeterShor 'repetition' still sounds 'off' to me. Wiktionary seems wrong here. $\endgroup$ – Mitch $\begingroup$ @Mitch: Wiktionary isn't the only one to use it. See here and here and here and Ngrams. $\endgroup$ $\begingroup$ @PeterShor Oh OK, that makes sense. Replacement with balls and urns but repetition with other things. But an expanded Ngrams search does lean towards doubt of Wiktionary, that is, that it should be 'selection with replacement'. $\endgroup$ This isn't an answer, but I think it's too long for a comment. Sometimes the most literal translation of a technical term doesn't sound quite right in the language you're translating to. For example, Schrödinger used the term verschränken in German to describe particles related by Einstein's spooky-action-as-a distance, and then he used the word entangled in English, because a more literal translation of verschränken didn't have the right feel. So to answer this properly, you might have to go back to the history of the words arrangement in French, k-permutation in English, Variation in German, find out when they were first used and who first used them, see whether they cited any papers in other languages, and try to decide whether a literal translation would have been perceived as incorrect for some reason. 
For example, I don't think variation would sound right in English, although permutation and arrangement would both work. Peter ShorPeter Shor Thanks for contributing an answer to History of Science and Mathematics Stack Exchange! Not the answer you're looking for? Browse other questions tagged mathematics terminology language combinatorics or ask your own question. Was English mathematics behind Europe by many years because of Newton's notation? What is the first usage of the term "Adjoint" and why was this word chosen? What is the origin of the terminology 'spin up/down'? Meaning of passages by Gauss on the "convergence of expansions (in infinite series) of the (elliptical) equation of the center"? Origin of "bootstrapping" in mathematical logic How is the word kernel associated with distributions? Did Gosper or the Borweins first prove Ramanujans formula?
CommonCrawl
Quantifying the role of surface plasmon excitation and hot carrier transport in plasmonic devices Long-lived modulation of plasmonic absorption by ballistic thermal injection John A. Tomko, Evan L. Runnerstrom, … Patrick E. Hopkins Electrical tuning effect for Schottky barrier and hot-electron harvest in a plasmonic Au/TiO2 nanostructure Zhiguang Sun & Yurui Fang Plasmon-induced hot electron transfer in AgNW@TiO2@AuNPs nanostructures Jiaji Cheng, Yiwen Li, … Marie-Hélène Delville Plasmon-induced hot-hole generation and extraction at nano-heterointerfaces for photocatalysis Monika Ahlawat, Diksha Mittal & Vishal Govind Rao Infrared driven hot electron generation and transfer from non-noble metal plasmonic nanocrystals Dongming Zhou, Xufeng Li, … Haiming Zhu Ultrafast hot-hole injection modifies hot-electron dynamics in Au/p-GaN heterostructures Giulia Tagliabue, Joseph S. DuChene, … Harry A. Atwater Hot-electron emission processes in waveguide-integrated graphene Fatemeh Rezaeifar, Ragib Ahsan, … Rehan Kapadia Plasmonic–perovskite solar cells, light emitters, and sensors Bin Ai, Ziwei Fan & Zi Jing Wong Attosecond-fast internal photoemission Christian Heide, Martin Hauck, … Peter Hommelhoff Giulia Tagliabue1,2, Adam S. Jermyn3, Ravishankar Sundararaman4, Alex J. Welch1,2, Joseph S. DuChene1,2, Ragip Pala1, Artur R. Davoyan1,5,6, Prineha Narang7 & Harry A. Atwater ORCID: orcid.org/0000-0001-9435-02011,2,6 Nature Communications volume 9, Article number: 3394 (2018) Cite this article Photonic devices Harnessing photoexcited "hot" carriers in metallic nanostructures could define a new phase of non-equilibrium optoelectronics for photodetection and photocatalysis. Surface plasmons are considered pivotal for enabling efficient operation of hot carrier devices. Clarifying the fundamental role of plasmon excitation is therefore critical for exploiting their full potential. Here, we measure the internal quantum efficiency in photoexcited gold (Au)–gallium nitride (GaN) Schottky diodes to elucidate and quantify the distinct roles of surface plasmon excitation, hot carrier transport, and carrier injection in device performance. We show that plasmon excitation does not influence the electronic processes occurring within the hot carrier device. Instead, the metal band structure and carrier transport processes dictate the observed hot carrier photocurrent distribution. The excellent agreement with parameter-free calculations indicates that photoexcited electrons generated in ultra-thin Au nanostructures impinge ballistically on the Au–GaN interface, suggesting the possibility for hot carrier collection without substantial energy losses via thermalization. Efficient collection of photoexcited, non-equilibrium "hot" carriers within metallic nanostructures offers considerable promise for band gap-free photodetection and selective photocatalysis1,2. However, practical applications require significant improvements in the performance of hot carrier devices relative to current performance. Excitation of surface plasmon polaritons—hybrid light-matter states localized at a metallic interface—is commonly viewed as a promising pathway for boosting the efficiency of these systems1,2,3,4,5,6,7,8,9,10,11,12,13. Indeed, numerous experimental studies based on internal photoemission14,15,16 (IPE) of hot electrons in metal–semiconductor photodiodes have shown a close correlation between the plasmonic resonance of the nanoantenna and the device responsivity (i.e., light-to-current conversion)12,17,18,19,20,21,22,23. 
Such close correlation suggested that the dramatically enhanced optical near-fields associated with surface plasmon excitation may alter the quantum efficiency of hot carrier generation and collection7,8,13,18. To date, however, a detailed analysis distinguishing the role of plasmon excitation from hot carrier transport and injection in these systems has remained elusive. The responsivity of a photodetector, or equivalently its external quantum efficiency (EQE), describes the overall efficiency with which the device converts incident photons to collected electrons (Fig. 1a). However, this metric convolutes the effects of plasmonic absorption with the subsequent electronic relaxation and transport processes that occur within the device. While surface plasmons are well known to enhance light absorption24 (Fig. 1b), deeper insight into their fundamental role in the physics of hot carrier devices requires a careful analysis of the internal quantum efficiency (IQE, Fig. 1c), which deconvolutes absorption and transport. Indeed, IQE is an established measure for evaluating interband processes in semiconductor optoelectronics25. Yet, experimental studies of IQE in plasmonic hot carrier IPE systems to date have provided limited understanding of plasmon-mediated hot carrier transport and injection. In particular, previous work has relied on a semi-classical Fowler theory for interpreting the experimental IQE spectra17,21,22,23,26,27,28,29. Failures of this approximation in the visible regime18,23,30, where interband absorption in metals may be dominant, have required making ad hoc assumptions regarding the effect of plasmon excitation in electronic transport processes26,31, in contrast with results of recent ab initio calculations32. Furthermore, a deeper experimental analysis of plasmonic hot carrier transport has so far been obscured by parasitic optical losses present in the plasmonic structures (e.g., from use of adhesion layers and by parasitic hot carrier relaxation and absorption away from the junction in nanostructures thicker than the hot carrier mean free path). Overall, a lack of systematic experimental measurements together with limited model fidelity have prevented a clear assessment of the physics underlying plasmon-derived hot carrier transport and collection. Carrier generation and transport in photoexcited metal nanostructures. a Schematic representation of carrier generation and transport via internal photoemission (IPE) in a plasmonic metal–semiconductor heterostructure: charge carriers created in the metal upon illumination are separated across the metal–semiconductor interface generating a photocurrent at sub-bandgap photon energies. The external quantum efficiency (EQE) spectrum represents the wavelength (λ)-dependent photon-to-electron conversion probability. As show in b and c, the EQE can be decomposed into the product of absorption and internal quantum efficiency (IQE); b Illustrative absorption spectrum of a metal nanostructure displaying a resonant plasmonic feature which can be engineered through photonic design. 
Plasmon excitation indeed yields high absorption in metallic nanostructures with characteristic dimension L much smaller than the wavelength λ of the incident photon; c Illustrative IQE spectrum and schematic representation of the electronic processes which contribute to it, i.e., generation of carriers through intraband and interband transitions, propagation, and scattering of the hot carriers with energy-dependent mean free path (lmfp), and injection of hot carriers with adequate kinetic energy (Ekin) and momentum (k) across the Schottky barrier, ΦB In this work, we perform an experimental study to elucidate and quantify the role of plasmons in hot carrier devices. We assess the IQE of several hot carrier devices with distinct plasmonic resonances, which were designed to minimize parasitic effects, including optical loss and carrier relaxation. Our studies indicate that transport—as characterized by the IQE—is a distinct and independent process from carrier generation by plasmon excitation. With direct measurements, we deterministically conclude that plasmons solely affect the optical properties of the device without modifying the internal processes associated with hot carrier transport and collection. We also show that the metal electronic band structure and the metal–semiconductor interface influence device performance, particularly at photon energies above the interband absorption threshold. We further provide insight into hot carrier generation, transport, and collection in plasmonic-metal/semiconductor Schottky junctions by coupling spectrally resolved measurements of hot electron collection across Au/n-GaN heterojunctions with a recently developed parameter-free hot carrier transport model33. Going beyond a description of individual electronic processes32,34,35,36,37, this combination of theory and experiment enables an accurate depiction of the complex interplay between hot carrier generation and transport in realistic experimental structures without ad hoc assumptions. In particular, our analysis reveals that the measured photocurrents arise from ballistically injected hot electrons at photon energies below the threshold for interband transitions (~2 eV). Experimental evaluation of IQE and the role of plasmon excitation To experimentally assess the role of plasmon excitation on hot electron device performance, it is necessary to decouple optical excitation from subsequent electronic transport and collection. For this purpose we experimentally compared several Au/GaN photodetector devices with distinct plasmon resonances but identical metal–semiconductor Schottky junctions. An abrupt plasmonic metal/semiconductor interface and plasmonic nanoantennas with thickness smaller than the hot carrier mean free path (lmfp) are necessary to ensure maximal sensitivity to ballistically harvested carriers. Accordingly, our experimental platform consists of planar Au plasmonic photodiodes on an optically transparent yet highly electrically conductive n-type GaN substrate, that we have identified as an optimal support (see Methods) to enable coupled electrical and optical (both transmission and reflection) characterization throughout the entire ultraviolet/visible/near infrared spectral range. Each heterostructure consists of a large Au contact pad connected to an array of electrically conductive Au stripes, which serve as nanoantennas that support plasmon resonances in the Vis-NIR regime. 
For a fixed period (P) of 230 nm, specifically chosen to suppress diffraction orders in the wavelength (λ) range of interest, the spectral position of the dipolar plasmon mode is controlled by adjusting the stripe width (W). Three hot carrier heterostructures were constructed with W of 61, 70, and 85 nm to achieve plasmon resonances located at ca. 1.9, 1.85, and 1.72 eV. The Au nanoantenna thickness (tAu = 20 nm) approaches the expected average mean free path for hot carriers (ca. 10–20 nm at 2 eV32) and was chosen to maximize the collection of ballistic hot electrons without sacrificing optical absorption. A titanium (Ti) Ohmic contact completes the planar plasmonic diode so that photocurrent can be collected while illuminating the sample through the transparent sapphire substrate (Fig. 2a and Methods). Role of plasmon excitation on hot electron IPE in metal–semiconductor heterostructures. a Schematic representation of the designed plasmonic heterostructures as well as measurement configuration: a 20 nm thick, nano-patterned gold (Au) photoelectrode is fabricated on n-type GaN (3.4 eV band gap, Schottky barrier ΦB ~1.2 eV) together with a 75 nm thick titanium (Ti) Ohmic contact; light is incident on the plasmonic resonant Au nanostripe array (stripe width W, array period P) from the bottom and the photocurrent is collected via two microcontact probes; b short-circuit photocurrent Isc (i.e., 0 V applied bias) upon illumination of one heterostructure (W = 61 nm) with a diode laser (λlaser = 633 nm) as a function of incident power; c EQE spectrum of the fabricated heterostructure with stripe width W = 61 nm and periodicity P = 230 nm exhibiting a resonance peak at λpeak = 650 nm; d spatial maps of absorption for illumination of the Au photoelectrode off-resonance (514 nm–2.14 eV) and on-resonance (650 nm–1.9 eV) with light polarized perpendicular to the stripes; e measured (solid line) and simulated (dashed line) absorption spectra for the same heterostructure exhibiting a plasmon resonance at λpeak = 650 nm; f EQE and absorption resonance peak wavelengths (λpeak) for three heterostructures with constant array periodicity (P = 230 nm) and increasing nanostripe width, W, equal to 61 nm (blue), 70 nm (gray), and 85 nm (red), respectively. Representative SEM micrographs are shown on the right (scale bar = 500 nm); g IQE spectra of the three plasmonic heterostructures shown in part (f). The formation of a Schottky barrier (ΦB ~1.2 eV38) at the Au/n-GaN interface ensures that electron-hole pair separation occurs even in the absence of an external bias. As expected, we observed a linear relationship between the short-circuit photocurrent, Isc, and the incident laser power (Fig. 2b) when using a 633 nm diode laser to irradiate a stripe array (W = 61 nm). We attribute the linear photoresponse to the injection of hot electrons from the Au nanoantennas into the n-GaN conduction band, since the incident photon energy is much less than the bandgap of the semiconductor (Eg = 3.4 eV ~364 nm39). Furthermore, the large barrier for hot hole injection from the metal into the semiconductor valence band (ΦB,Hole > 3 eV) allows us to exclude any potential contribution from hot holes to the device photocurrent in the studied photon energy range.
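The linear Isc versus power relationship described above can be condensed into a single responsivity value and, from it, an EQE number. The short sketch below illustrates only the arithmetic of that conversion; the current and power values are made-up placeholders, not data from the paper, and numpy's polyfit is used purely as a convenience for the linear fit.

```python
import numpy as np

# Hypothetical short-circuit current vs. incident power data at 633 nm;
# these numbers are placeholders for illustration, not values from the paper.
power_W = np.array([0.2e-3, 0.4e-3, 0.6e-3, 0.8e-3, 1.0e-3])   # incident power (W)
isc_A   = np.array([0.9e-9, 1.8e-9, 2.7e-9, 3.6e-9, 4.5e-9])   # short-circuit current (A)

# Linear fit: the slope is the responsivity R = dI/dP (A/W);
# a near-zero intercept supports the linear, one-photon-per-electron picture.
responsivity, intercept = np.polyfit(power_W, isc_A, 1)

# EQE = (electrons out) / (photons in) = R * h*c / (e * lambda)
h, c, e = 6.626e-34, 2.998e8, 1.602e-19
wavelength = 633e-9
eqe = responsivity * h * c / (e * wavelength)

print(f"responsivity = {responsivity:.3e} A/W (intercept = {intercept:.2e} A)")
print(f"EQE at 633 nm = {eqe:.3e}")
```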
For each heterostructure, steady-state EQE and absorption spectra are determined experimentally by measuring both the wavelength-dependent photocurrent as well as transmission and reflection spectra under the same illumination conditions of tunable, monochromatic light polarized perpendicular to the stripes (see Methods). For the heterostructure with W = 61 nm, a resonance peak at λpeak = 650 nm can be observed in both spectra (Fig. 2c, e), absorption being in excellent agreement with numerical simulations (Fig. 2e, dashed line). Spatial maps of absorption in the photoelectrode were collected off-resonance above the interband threshold of Au (λ = 514 nm < λIB ~688 nm) as well as on-resonance (λpeak = 650 nm). In the first case, the unpatterned Au pad exhibits larger absorption than the array of nanoantennas (Fig. 2d, λ = 514 nm). Instead, on resonance (Fig. 2d, λ = 650 nm), absorption in the plasmonic stripe array (≈60%) greatly exceeds that of the Au film. It is noted that this feature disappears upon rotating the incident light polarization by 90° (Supplementary Notes 1 and 2). Such behavior confirms that the photocurrent originates from optical excitation of the dipolar plasmon mode in the nanoantennas. It is interesting to note that not only the plasmon resonance, but also the fringes present in the absorption spectrum (Fig. 2e), which are due to Fabry–Perot interference40 in the planar GaN/sapphire substrate structure (Supplementary Note 7), cause a modulation in the photocurrent response that is reproduced in the EQE spectrum (Fig. 2c). Comparing the optical (absorption) and electrical (EQE) performance of three hot carrier heterostructures with varying stripe width, we find a close correlation between the plasmon excitation wavelength and the EQE peak response (Fig. 2f). Increasing W from 61 to 85 nm red-shifts both absorption and EQE peak positions (λpeak) by a commensurate amount (Supplementary Note 2). In contrast, the IQE spectra, determined by taking the ratio of EQE and absorption (Fig. 2g), do not exhibit any spectral features that are associated with the characteristic peak wavelength for plasmonic absorption in each device (see Supplementary Note 7 regarding the residual Fabry–Perot fringes). The striking similarity of the three IQE curves indicates that the carrier transport and collection processes are the same in all three devices, even though the absorption spectra are different, suggesting that the role of plasmon excitation is primarily associated with optical absorption and not transport. That is, tunable plasmon resonances efficiently couple far-field radiation into nanoscale volumes and this mechanism dominates the EQE across a range of wavelengths. This observation implies that the intrinsic material properties of the metal and the interface barrier height dictate the transport characteristics of the heterostructure. Thus, plasmon excitation does not a priori selectively enhance the rate of any particular decay process or transport mechanism. Interestingly, as also remarked in previous studies18,23,26, we observed that all three IQE curves were characterized by a broad, asymmetric feature peaking around 560–565 nm (~2.2 eV), which cannot be described by conventional Fowler models for IPE. Contrary to previous speculations about the role of indirect bandgap materials18, our results on a direct bandgap semiconductor (n-GaN) indicate that it is the electronic band structure of the metal that determines the energy dependence of the IQE.
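Because the IQE spectra of Fig. 2g are obtained simply as the ratio of the measured EQE and absorption spectra, the computation itself is a one-liner once the two spectra sit on a common wavelength grid. The sketch below shows one way to do this; the toy Lorentzian spectra, the function name, and the guard against dividing by very small absorption values are assumptions added for illustration, not part of the authors' analysis code.

```python
import numpy as np

def iqe_spectrum(wl_eqe, eqe, wl_abs, absorption, min_abs=1e-3):
    """Divide EQE by absorption on a common wavelength grid to obtain IQE.

    wl_eqe, eqe        : wavelengths (nm) and EQE values from photocurrent data
    wl_abs, absorption : wavelengths (nm) and absorptance (0-1) from optical data
    min_abs            : skip points where absorption is too small to divide safely
    """
    a_interp = np.interp(wl_eqe, wl_abs, absorption)  # put absorption on the EQE grid
    mask = a_interp > min_abs
    return wl_eqe[mask], eqe[mask] / a_interp[mask]

# Illustrative toy spectra: a Lorentzian "plasmon" peak at 650 nm on a flat background
wl = np.linspace(500, 800, 301)
absorption = 0.15 + 0.45 / (1 + ((wl - 650) / 40) ** 2)
iqe_true = 2e-4 * np.ones_like(wl)      # a featureless IQE, as observed experimentally
eqe = absorption * iqe_true             # EQE = absorption x IQE

wl_out, iqe_est = iqe_spectrum(wl, eqe, wl, absorption)
print(iqe_est[:3])  # recovers the flat ~2e-4 IQE even though the EQE itself is peaked
```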
Ab initio modeling of electronic processes and IQE For hot carriers, IQE comprises three distinct processes25 (Fig. 1c): (i) generation of a non-equilibrium distribution of "hot" electrons and holes in the metal nanostructure upon plasmon decay via intraband (sp-sp) and interband (d-sp) optical transitions32; (ii) transport of these hot carriers to an interface either ballistically or via electron–electron and electron–phonon scattering and relaxation32; (iii) injection of carriers with appropriate momenta and sufficient kinetic energy above the interfacial Schottky barrier (ΦB)25. We can relate the specific shape of the IQE curves to the interplay between the two hot carrier generation mechanisms, namely, intraband and interband transitions, as well as their corresponding hot carrier distributions relative to the Schottky barrier height present at the metal–semiconductor interface. The interband and intraband decay rates are determined from density functional theory (DFT) calculations, which generate the prompt hot electron energy distribution. For antennas with sizes of the order of tens of nanometers as in our study, quantization effects of the electronic levels of the metal can be neglected and the bulk properties of gold can be used. Devices employing metallic nanocrystals with dimensions smaller than a couple of nanometers would need to take this aspect into account36. The decay rate is dependent on both incident photon energy and the electronic band structure of the metal32,35. For photon energies below the interband threshold of Au (hνIB ~1.8 eV), hot electrons generated via intraband transitions have a nearly uniform probability at all energies between the Fermi level and one photon energy above it (Fig. 3a, solid red curve). As a result, intraband excitation accounts for a sizable fraction of the hot electron distribution at energies above the Schottky barrier height (gray shaded area in Fig. 3a). In this low photon energy regime there is very good agreement between the Fowler model, based on the parabolic band approximation, and full DFT calculations (compare solid red curve with dashed red curve in Fig. 3a). On the other hand, above hνIB (Fig. 3b, solid turquoise curve), a much higher probability distribution is observed for low-energy carriers, since hot electrons originate from d-band levels deep below the Au Fermi level41. Consequently, there is a substantial reduction in the fraction of high-energy electrons created from intraband transitions compared to that predicted for the case of a purely parabolic band (Fig. 3b, dashed turquoise curve). This interplay, combined with the height of our Schottky barrier (gray shaded area in Fig. 3a, b—see also Supplementary Note 3), results in a reduction in IQE at energies above the interband threshold (Fig. 3c, magenta solid line). This is in sharp contrast to predictions of the Fowler model, which accounts exclusively for intraband processes (Fig. 3c, gray dashed line). However, it must be recognized that even above hνIB both types of transitions occur simultaneously and high-energy carriers continue to be generated, though with decreasing probability. Plasmon excitation does not alter this interplay, as it does not directly influence the hot carrier distribution, only the number of photons absorbed through plasmon excitation at a given frequency.
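To see why a flat intraband distribution leads to the familiar Fowler-type dependence on photon energy, one can simply ask what fraction of a uniform distribution between EF and EF + hν lies above the barrier. The toy calculation below does exactly that; it is a deliberately simplified cartoon of the picture described above (no interband transitions, no momentum constraint) and is not the DFT model used in the paper.

```python
import numpy as np

phi_B = 1.2  # Schottky barrier height in eV (Au/n-GaN, from the text)

def fraction_above_barrier(hv, phi_b=phi_B):
    """Fraction of hot electrons above the barrier for a flat distribution of
    carrier energies between E_F and E_F + hv (intraband-only cartoon)."""
    hv = np.asarray(hv, dtype=float)
    return np.clip((hv - phi_b) / hv, 0.0, None)

photon_energies = np.array([1.3, 1.5, 1.7, 1.9, 2.1, 2.3, 2.5])  # eV
for hv, frac in zip(photon_energies, fraction_above_barrier(photon_energies)):
    print(f"hv = {hv:.1f} eV -> fraction above barrier = {frac:.3f}")

# A Fowler-type yield ~ (hv - phi_B)^2 / hv adds a further escape/momentum factor
# roughly proportional to (hv - phi_B); both expressions vanish at hv = phi_B.
```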
Changes in the dominant optical transition mechanism with increasing photon energy explain why the metal band structure, and in particular its interband threshold, has such a profound effect on the overall IQE of hot carrier devices. Interestingly, recent DFT calculations32 show that in the case of Al nanoantennas, interband transitions produce a hot electron/hot hole distribution which is very similar to the intraband case and therefore IQE could preserve the quadratic dependence on the photon energy even above the interband threshold (~1.6 eV). Impact of interband and intraband transitions on IQE of hot carrier devices. a Prompt hot electron energy distribution (Pgen) showing the carrier energy E above the Au Fermi level (EF) calculated with DFT (solid line) as well as under the parabolic band approximation (Fowler-like model, dashed line) for incident photon energies (hν) of 1.4 eV and b 2.4 eV. The shaded area in both plots depicts the position of the Schottky barrier, ΦB, limiting the possibility of collection to those carriers with energy E−EF > ΦB. The insets show a schematic of the metal and semiconductor band structure illustrating the predominance of intraband transitions (a) and co-existence with interband transitions (b) as well as the presence of the Schottky barrier at the interface; c IQE spectra calculated based on the Pgen obtained with DFT (magenta solid curve), i.e., including interband transitions, as well as with parabolic band approximation (gray dashed curve), i.e., accounting only for intraband transitions. For the injection process, conservation of tangential momentum is assumed21. Transport of hot electrons within the metal nanostructure has been neglected. A microscopic understanding of hot carrier transport in Au/n-GaN heterostructures is obtained by comparing experimental measurements to results of a recently developed theoretical framework that combines electromagnetic simulations, ab initio DFT calculations, and Boltzmann transport methods to compute the generation and transport of hot carriers within realistically scaled (ca. 10–100 nm) metallic structures33 (see Methods). From electromagnetic simulations, we first determine the electric field profile in a single Au nanoantenna (W = 61 nm, Fig. 4a and Supplementary Note 8). The initial energy and momentum distribution of the hot carriers is obtained from plasmon decay rates and electronic optical excitations derived from DFT calculations32,35, which account for the anisotropies associated with these quantities in the interband regime as well as resistive contributions in the intraband regime. Energy-dependent lifetimes and mean free paths (lmfp) are also calculated with ab initio methods accounting for both electron–electron and electron–phonon scattering processes and have been shown previously to agree very well with experimental results34,42. All the calculated quantities are averaged over different crystalline orientations to reflect the polycrystalline nature of the fabricated structures. This information is combined in a Boltzmann transport calculation33 where we compute the propagation of carriers across the Au nanostructure, determining changes to their energy distribution as well as the number of scattering events they experience. For each photon energy, our calculations yield the energy-resolved flux FN(E) of hot electrons with energy E above the metal Fermi level that reach the Au/n-GaN interface after up to N scattering events.
Attesting to the validity of our computational approach, the energy-resolved flux of hot electrons that reach the interface ballistically, F0(E), shown in Fig. 4b, retains the key features described in Fig. 3a. The model also shows that scattering processes serve to homogenize the hot carrier distributions by smoothing the transition between the intraband and interband generated carriers that reach the interface (Fig. 4c). Hot electron generation and transport in plasmonic nanoantennas. a Calculated spatial profile of the electric field norm |Efield| at resonance (λ = 650 nm, hν = 1.9 eV, E0 = 2.56 × 10^5 V m^−1) for the experimental structure with W = 61 nm and P = 230 nm. |Efield| in the metal defines the spatial generation profile of the hot carriers. As schematically illustrated, hot electrons then propagate across the metal structure and reach the Au–GaN interface either ballistically (solid arrow) or after scattering (dashed arrows); b energy-resolved flux of hot electrons reaching the Au–GaN interface ballistically for photon energy of 1.7 eV (orange curves, weak interband contribution) and 2.4 eV (turquoise curves, strong interband contribution). The shaded area shows the position of the Schottky barrier; c same as b but including the flux of carriers that have undergone up to N = 3 scattering events; d IQE spectra calculated based on the computed energy-resolved fluxes, both for the ballistic case (blue solid curve) and for N = 3 (blue dashed curve), under the assumption of tangential momentum conservation for the injection probability21. The gray dashed curve represents the IQE estimated based on the fit of Fowler yield, IQE_Fowler = C·(hν−ΦB)^2/hν with ΦB ~1.2 eV and C = 6.7 × 10^−5. The gray solid curve is the experimentally determined IQE (Fig. 2g, blue curve). Estimating the injection probability, Pinj(E), across the Schottky barrier based on the assumption of tangential momentum conservation (Supplementary Note 4)21, we then calculate the IQE as the total flux of carriers injected above $\Phi_{\mathrm{B}}$: $$\mathrm{IQE} = \int_{\Phi_{\mathrm{B}}}^{\infty} F_{N}(E) \cdot P_{\mathrm{inj}}(E)\,\mathrm{d}E \quad \mathrm{for}\; N = 0, 1, \ldots$$ The blue solid curve in Fig. 4d represents the IQE spectrum predicted from F0(E) and the blue dashed curve is the predicted IQE obtained for F3(E). Including additional scattering events only changes the IQE by 0.01%, indicating that the vast majority of hot electrons undergo no more than three scattering events before being collected. Significantly, our parameter-free model of hot carrier generation, transport, and injection is in excellent quantitative agreement with the experimental data (gray solid curve). This result shows that a detailed description of material properties and device geometry can precisely capture the details of plasmonic hot carrier transport under illumination, both on and off resonance. Strikingly, the results of our model indicate that more than 90% of the hot carriers are collected ballistically at photon energies below 2 eV (λ > 620 nm), implying that hot carrier transport in our Au nanoantennas occurs in the ballistic regime at the plasmon peak position. This result retrospectively validates the tailored design of our experimental platform toward ballistic hot carrier collection. Increasing the thickness of the plasmonic antenna would increase the contribution of scattered carriers to the observed photocurrent, due to the increased distance that hot electrons must travel before reaching the interface.
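The IQE integral above is straightforward to evaluate numerically once an energy-resolved flux F_N(E) and an injection probability P_inj(E) are specified. The sketch below uses a crude flat flux and a linear stand-in for the tangential-momentum-conserving injection probability, so the resulting numbers are purely illustrative; only the Fowler expression and its quoted parameters (C = 6.7 × 10^−5, ΦB ≈ 1.2 eV) are taken from the caption above.

```python
import numpy as np

phi_B = 1.2        # Schottky barrier height (eV), from the text
C_fowler = 6.7e-5  # Fowler prefactor quoted in the Fig. 4d caption

def p_inj(E, phi_b=phi_B):
    """Placeholder injection probability, growing linearly above the barrier.
    (The real P_inj follows from tangential momentum conservation; this simple
    form is an assumption made only for illustration.)"""
    return np.clip((E - phi_b) / 2.0, 0.0, 1.0)

def flux_flat(E, hv):
    """Toy ballistic flux: uniform between E_F and E_F + hv, zero elsewhere."""
    return np.where((E > 0) & (E < hv), 1.0 / hv, 0.0)

def iqe_numeric(hv, n_pts=2000):
    """Evaluate IQE = integral over E > phi_B of F(E) * P_inj(E) dE (trapezoid rule)."""
    if hv <= phi_B:
        return 0.0
    E = np.linspace(phi_B, hv, n_pts)
    f = flux_flat(E, hv) * p_inj(E)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E)))

def iqe_fowler(hv):
    return C_fowler * max(hv - phi_B, 0.0) ** 2 / hv

# With this flat flux and linear P_inj the integral works out to (hv - phi_B)^2 / (4 hv),
# i.e., exactly a Fowler-like quadratic dependence (up to the prefactor).
for hv in (1.4, 1.7, 1.9, 2.4):
    print(f"hv = {hv:.1f} eV   toy IQE = {iqe_numeric(hv):.3e}   Fowler fit = {iqe_fowler(hv):.3e}")
```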
Since each electron–electron scattering event approximately reduces the electron energy by a factor of two, it is expected that scattered carriers would only provide a significant contribution at higher photon energies; those carriers created at lower photon energies would likely have insufficient energy to overcome the Schottky barrier. Nonetheless, in the studied configuration, which is very common for plasmonic photodetectors, the plasmonic antenna sits on a high-index GaN substrate, and therefore the electric field is localized close to the metal–semiconductor interface upon excitation of the fundamental plasmon mode (Fig. 4a). The largest hot carrier generation thus occurs close to the interface, and as a result, the non-uniform field profile inside the antenna favors ballistic collection, mitigating the effect of increasing antenna thickness. Therefore, by enabling strong light localization in metallic nanostructures (Fig. 4a) plasmon excitation may be able to realize optoelectronic systems that operate in the truly ballistic regime despite the short, energy-dependent lmfp of hot electrons in metals. We also observe that our experimental IQE values agree quantitatively with theoretical results based on metal electronic structure, suggesting that the collection efficiency is limited by fundamental electronic structure characteristics of the metal and interface. To summarize, the key aspects influencing IQE are: (i) the metal band structure, (ii) the transport processes to the interface, (iii) the Schottky barrier height, and (iv) the momentum matching condition for injection across the interface. We note that the momentum matching factor has profound consequences for the overall magnitude of the IQE. Indeed, the low effective electron mass in GaN43 and the smooth metal–semiconductor interface in our devices, which imposes tangential momentum conservation, account for a reduction in IQE by nearly four orders of magnitude (Supplementary Note 5). Use of semiconductors with heavy electrons or large density of states in the conduction band (e.g., TiO2) as well as nanoscopically roughened metal–semiconductor interfaces could thus be beneficial to boost the IQE and performance of hot carrier IPE devices. Irrespective of the Schottky barrier height, momentum matching conditions also cause a disproportionate suppression in the collection of low-energy electrons originating from either interband transitions or scattering of high-energy carriers generated by intraband transitions. Therefore the metal–semiconductor interface plays a significant role in the ultimate efficiency of plasmonic hot carrier IPE devices. We also note here that plasmon-mediated interfacial hot carrier excitation has been observed in selected systems employing small metallic nanocrystals and constitutes a different mechanism for harnessing hot carriers beyond IPE44,45. In fact, in the case of interfacial plasmon excitation the quantum efficiency has been shown to exhibit a stepwise efficiency spectrum with a system-specific threshold energy44. However, in the studied systems, which have dimensions of several tens of nanometers, we can entirely ascribe the IQE spectral features to the metal properties and we do not observe any deviations that could be attributed to a competing contribution from interfacial plasmon excitation.
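A rough feel for why the 20 nm antenna thickness matters can be obtained from the exponential attenuation of ballistic carriers over the mean free paths quoted earlier (roughly 10–20 nm near 2 eV). The estimate below is only an order-of-magnitude cartoon for straight-line paths normal to the interface; the real calculation averages over generation positions, directions, and energies.

```python
import numpy as np

t_au = 20e-9                                     # antenna thickness (m), from the text
lmfp_values = np.array([10e-9, 15e-9, 20e-9])    # assumed mean free paths (m)

# A carrier generated a distance x from the interface survives without scattering
# with probability exp(-x / lmfp) along a straight path normal to the interface.
for lmfp in lmfp_values:
    worst_case = np.exp(-t_au / lmfp)                          # generated at the far face
    uniform_avg = (lmfp / t_au) * (1 - np.exp(-t_au / lmfp))   # generation uniform in depth
    print(f"lmfp = {lmfp*1e9:.0f} nm: far-face ballistic prob = {worst_case:.2f}, "
          f"depth-averaged = {uniform_avg:.2f}")
```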
Transport of carriers from their point of generation to the interface, where they are filtered by the presence of a sizeable Schottky barrier (ΦB ~1.2 eV), accounts for the remaining one to two orders of magnitude reduction in IQE. It is worth noting that even assuming a 50 meV Schottky barrier, values of IQE ~10^−4 are expected for this system (Supplementary Note 5). Considering these factors, we suggest that a potentially promising strategy for increasing the IQE value is to identify metals with a high density of states close to the Fermi level, which would enable the efficient creation of hot electrons with high energies and offer an interesting path toward high-performance hot carrier devices. Simultaneously, careful design of the device geometry19,30,46 and further engineering of the spatial hot carrier generation profile could promote ballistic collection, and hence improve device efficiency. To summarize, our experimental analysis of IQE in ultrathin plasmonic nanoantennas with abrupt metal/semiconductor interfaces reveals that plasmon excitation enables the efficient coupling of far-field radiation into nanoscale volumes, but does not dictate the transport physics governing the performance of hot carrier photoemission devices. Instead, analysis of the IQE spectra emphasizes the role of interband and intraband decay processes, as well as carrier transport over nanometer-scale distances in the metal, in determining the distribution of hot carriers that are collected via IPE. Our observation of ballistic electrons is encouraging for efforts to use ballistic hot carrier collection for ultrafast photodetection and excited-state photocatalysis. Our results reveal mechanisms important to the design of efficient hot carrier devices, and they suggest that new materials with tailored band structure and transport properties will be crucial for the realization of efficient hot carrier-driven devices. Future experiments using ultrafast spectroscopy techniques and time-resolved IQE measurements may expand our understanding of hot carrier transport, and allow for more comprehensive comparison with theoretical predictions. As an outlook, the agreement between our experimental data and our detailed, parameter-free theoretical hot carrier transport model suggests that this combined approach can be a powerful tool to guide the design of future hot carrier optoelectronic devices. Sample fabrication In order to perform coupled optical and electrical measurements of a plasmonic IPE device for experimental assessment of its IQE, it is necessary to have a semiconducting substrate which: (i) does not absorb light in the wavelength range of interest in order to prevent interband photogeneration of carriers within the semiconductor and also does not scatter light (optically transparent); (ii) has high electrical conductivity to enable transport of hot carriers; and (iii) forms a Schottky barrier with the metal to favor separation of the hot electrons and holes and to prevent their recombination. The n-GaN substrate employed here satisfies all of these requirements: (i) it has a wide bandgap (3.4 eV), and is optically transparent, with no light scattering centers; (ii) due to its widespread use in optoelectronics, it is commercially available with various doping levels, in the form of highly doped, low-electrical-resistance crystalline substrates; (iii) its band alignment leads to the formation of a sizable Schottky barrier of ~1.2 eV with Au.
These factors motivated our use of GaN as a semiconducting support for the study of plasmonic IPE devices. GaN films on sapphire were purchased from Xiamen (4 ± 1 μm thick GaN layer, Ga-face, epi-ready, Nd = 5–7 × 10^17 cm^−3, ρ < 0.5 Ω cm, <10^8 dislocations/cm^2). A layer of S1813 was spin coated on the substrate (40 s, 3000 rpm) and post-baked for 2 min at 115 °C. The Ohmic pattern was exposed for 40 s and then developed for 10 s in MF319®. Then, 75 nm of Ti were deposited with e-beam evaporation (1.5 Å/s, base pressure lower than 5 × 10^−7 Torr). A layer of PMMA 495-A4 was spin coated on the sample (1 min, 4000 rpm) and baked for 2 min at 180 °C. Next, a layer of PMMA 950-A2 was spin coated on top of it (1 min, 5000 rpm) and also baked for 2 min at 180 °C. Then, e-beam lithography was used to write the nanoantenna pattern (Quanta FEI, NPGS System). Beam currents of approximately 40 pA were used with exposures ranging from 350 to 500 μC/cm^2, thus achieving different stripe widths with equal pitch. A 20 nm Au layer was then deposited with e-beam evaporation (Lesker) (0.8 Å/s, base pressure lower than 2 × 10^−7 Torr). Importantly, before any metal deposition, the sample was exposed to a mild oxygen plasma (30 s, 200 W, 300 mT) to remove any photoresist residue, dipped in a 1:15 NH4OH:DI H2O solution for 30 s to remove any surface oxide layer and finally rinsed in water (30 s) and blown dry with N2 gas. The substrate was then immediately loaded into the e-beam evaporator chamber, minimizing the time of exposure to ambient atmosphere. Photocurrent measurements A Fianium laser (2 W) was used as the light source for plasmon excitation. The beam was monochromated (slit width 200 μm), collimated, and finally focused onto the sample with a long working distance, low NA objective (Mitutoyo 5×, NA = 0.14). A Si photodetector was used to measure the transmitted power or, using a beam splitter, the reflected power incident on the sample. A silver mirror (M, Thorlabs) was used to normalize the reflection and the background (BG) was subtracted from all the measurements. A tilted glass slide was used to deflect a small amount of incident power from the laser onto a reference photodiode for coincident recording of the laser power incident on the sample. A chopper, typically at a frequency of ~100 Hz, was used to modulate the incident power and thus the photocurrent signal, which was subsequently processed with a lock-in amplifier. An external, low-noise current-to-voltage amplifier was used to feed the signal to the lock-in. Piezoelectric micro-probes (Mibots®) are utilized to electrically contact the sample and perform all of the photocurrent measurements. Numerical simulations A commercial finite element method software (COMSOL) is used to perform the electromagnetic simulations. The 3D simulations are performed to estimate absolute absorption values as well as 3D internal electric field distributions to be used in the subsequent hot carrier generation and transport code. The scattered field formulation is utilized. For the background field calculation, a port boundary condition with excitation "ON" is used to launch a plane wave with normal incidence and variable wavelength as well as for the recording of the reflected wave. A second port boundary condition without excitation is used to record the transmitted wave. Perfect magnetic conductor and periodic boundary conditions are used on the side walls (width of the unit cell equal to the array pitch, P = 230 nm, length of the cell equal to 50 nm).
For the calculation of the scattered field, perfectly matched layers are used in place of the port boundary conditions. Hot carrier generation and transport predictions The hot carrier flux is computed by iteratively evaluating the effects of transport and scattering. In each iteration, transport effects are computed using the 1D Green's function (exp(−x/lmfp), where lmfp is the mean free path) on a tetrahedral mesh. Multiple different directions are integrated via Monte Carlo sampling. This results in a deposition of transported carriers at the surface and scattered carriers in the interior. The scattered carriers are then transformed via the scattering matrix elements to produce a new energy distribution at each point in the mesh, which is used as the input to the next round of transport calculations. The initial input distribution is obtained using the carrier energy-resolved dielectric function Imε(ω, E) and the input electromagnetic field from COMSOL, evaluated on the same tetrahedral mesh. Imε(ω, E) and the energy-dependent mean free path lmfp(E) are obtained using Fermi's golden rule, with electron–phonon and electron–photon matrix elements calculated using the DFT software JDFTx47 (see ref. 33 for further details). First-principles methodologies are available through the open-source software JDFTx, and post-processing scripts are available from the authors upon request. All relevant data are available from the authors upon request. This Article was originally published without the accompanying Peer Review File. This file is now available in the HTML version of the Article; the PDF was correct from the time of publication. Narang, P., Sundararaman, R. & Atwater, H. A. Plasmonic hot carrier dynamics in solid-state and chemical systems for energy conversion. Nanophotonics 5, 96–111 (2016). Brongersma, M. L., Halas, N. J. & Nordlander, P. Plasmon-induced hot carrier science and technology. Nat. Nanotechnol. 10, 25–34 (2015). Li, W. & Valentine, J. G. Harvesting the loss: surface plasmon-based hot electron photodetection. Nanophotonics 6, 177 (2017). Tian, Y. & Tatsuma, T. Plasmon-induced photoelectrochemistry at metal nanoparticles supported on nanoporous TiO2. Chem. Commun. 0, 1810–1811 (2004). Christopher, P., Xin, H. L. & Linic, S. Visible-light-enhanced catalytic oxidation reactions on plasmonic silver nanostructures. Nat. Chem. 3, 467–472 (2011). Clavero, C. Plasmon-induced hot-electron generation at nanoparticle/metal-oxide interfaces for photovoltaic and photocatalytic devices. Nat. Photonics 8, 95–103 (2014). Linic, S., Aslam, U., Boerigter, C. & Morabito, M. Photochemical transformations on plasmonic metal nanoparticles. Nat. Mater. 14, 567–576 (2015). Christopher, P. & Moskovits, M. Hot charge carrier transmission from plasmonic nanostructures. Annu. Rev. Phys. Chem. 68, 379–398 (2017). Cortes, E. et al. Plasmonic hot electron transport drives nano-localized chemistry. Nat. Commun. 8, 14880 (2017). Robatjazi, H., Bahauddin, S. M., Doiron, C. & Thomann, I. Direct plasmon-driven photoelectrocatalysis. Nano Lett. 15, 6155–6161 (2015). Schlather, A. E. et al. Hot hole photoelectrochemistry on Au@SiO2@Au nanoparticles. J. Phys. Chem. Lett. 8, 2060–2067 (2017). Chen, Z. H. et al. Vertically aligned ZnO nanorod arrays sensitized with gold nanoparticles for Schottky barrier photovoltaic cells. J. Phys. Chem. C 113, 13433–13437 (2009). Sykes, M. E. et al.
Enhanced generation and anisotropic Coulomb scattering of hot electrons in an ultra-broadband plasmonic nanopatch metasurface. Nat. Commun. 8, 986 (2017). Fowler, R. H. The analysis of photoelectric sensitivity curves for clean metals at various temperatures. Phys. Rev. 38, 45–56 (1931). Padovani, F. A. & Stratton, R. Field and thermionic-field emission in Schottky barriers. Solid-State Electron. 9, 695–707 (1966). Helman, J. S. & Sánchez-Sinencio, F. Theory of internal photoemission. Phys. Rev. B 7, 3702–3706 (1973). Knight, M. W., Sobhani, H., Nordlander, P. & Halas, N. J. Photodetection with active optical antennas. Science 332, 702–704 (2011). Zheng, B. Y. et al. Distinguishing between plasmon-induced and photoexcited carriers in a device geometry. Nat. Commun. 6, 7797 (2015). Li, W. & Valentine, J. Metamaterial perfect absorber based hot electron photodetection. Nano Lett. 14, 3510–3514 (2014). Li, W. et al. Circularly polarized light detection with hot electrons in chiral plasmonic metamaterials. Nat. Commun. 6, 8379 (2015). Chalabi, H., Schoen, D. & Brongersma, M. L. Hot-electron photodetection with a plasmonic nanostripe antenna. Nano Lett. 14, 1374–1380 (2014). Ng, C. et al. Hot carrier extraction with plasmonic broadband absorbers. ACS Nano 10, 4704–4711 (2016). Fang, Y. et al. Plasmon enhanced internal photoemission in antenna-spacer-mirror based Au/TiO2 nanostructures. Nano Lett. 15, 4059–4065 (2015). Atwater, H. A. & Polman, A. Plasmonics for improved photovoltaic devices. Nat. Mater. 9, 205–213 (2010). Sze, S. M. Physics of Semiconductor Devices, 3rd edn (Wiley, Hoboken, NJ, 2006). Li, X.-H., Chou, J. B., Kwan, W. L., Elsharif, A. M. & Kim, S.-G. Effect of anisotropic electron momentum distribution of surface plasmon on internal photoemission of a Schottky hot carrier device. Opt. Express 25, A264–A273 (2017). Leenheer, A. J., Narang, P., Lewis, N. S. & Atwater, H. A. Solar energy conversion via hot electron internal photoemission in metallic nanostructures: efficiency estimates. J. Appl. Phys. 115, 134301 (2014). White, T. P. & Catchpole, K. R. Plasmon-enhanced internal photoemission for photovoltaics: theoretical efficiency limits. Appl. Phys. Lett. 101, 073905 (2012). Zhang, Y., Yam, C. Y. & Schatz, G. C. Fundamental limitations to plasmonic hot-carrier solar cells. J. Phys. Chem. Lett. 7, 1852–1858 (2016). Knight, M. W. et al. Embedding plasmonic nanostructure diodes enhances hot electron emission. Nano Lett. 13, 1687–1692 (2013). Uskov, A. V. et al. Internal photoemission from plasmonic nanoparticles: comparison between surface and volume photoelectric effects. Nanoscale 6, 4716–4727 (2014). Brown, A. M., Sundararaman, R., Narang, P., Goddard, W. A. & Atwater, H. A. Nonradiative plasmon decay and hot carrier dynamics: effects of phonons, surfaces, and geometry. ACS Nano 10, 957–966 (2016). Jermyn, A. S. et al. Far-from-equilibrium transport of excited carriers in nanostructures. Preprint at https://arxiv.org/abs/1707.07060 (2017). Brown, A. M. et al. Experimental and ab initio ultrafast carrier dynamics in plasmonic nanoparticles. Phys. Rev. Lett. 118, 087401 (2017). Sundararaman, R., Narang, P., Jermyn, A. S., Goddard, W. A. & Atwater, H. A. Theoretical predictions for hot-carrier generation from surface plasmon decay. Nat. Commun. 5, 5788 (2014). Zhang, H. & Govorov, A. O. Optical generation of hot plasmonic carriers in metal nanocrystals: the effects of shape and field enhancement. J. Phys. Chem.
C 118, 7606–7614 (2014). Manjavacas, A., Liu, J. G., Kulkarni, V. & Nordlander, P. Plasmon-induced hot carriers in metallic nanoparticles. ACS Nano 8, 7630–7638 (2014). Haynes, W. M. (ed.) Handbook of Chemistry and Physics, 97th edn (CRC Press, Boca Raton, FL, 2016). Madelung, O. Semiconductors: Group IV Elements and III–V Compounds (Springer-Verlag, Berlin, 1991). Perot, A. & Fabry, C. On the application of interference phenomena to the solution of various problems of spectroscopy and metrology. Astrophys. J. 9, 87–115 (1899). Sa, J. et al. Direct observation of charge separation on Au localized surface plasmons. Energy Environ. Sci. 6, 3584–3588 (2013). Brown, A. M., Sundararaman, R., Narang, P., Goddard, W. A. & Atwater, H. A. Ab initio phonon coupling and optical response of hot electrons in plasmonic metals. Phys. Rev. B 94, 075120 (2016). Bougrov, V., Levinshtein, M.E., Rumyantsev, S.L. & Zubrilov, A. in Properties of Advanced Semiconductor Materials GaN, AlN, InN, BN, SiC, SiGe (eds Levinshtein, M.E., Rumyantsev, S.L. & Shur, M.S.) 1–30 (John Wiley & Sons, Inc., New York, 2001). Wu, K., Chen, J., McBride, J. R. & Lian, T. Efficient hot-electron transfer by a plasmon-induced interfacial charge-transfer transition. Science 349, 632–635 (2015). Tan, S. et al. Plasmonic coupling at a metal/semiconductor interface. Nat. Photonics 11, 806–812 (2017). Ratchford, D. C., Dunkelberger, A. D., Vurgaftman, I., Owrutsky, J. C. & Pehrsson, P. E. Quantification of efficient plasmonic hot-electron injection in gold nanoparticle TiO2 films. Nano Lett. 17, 6047–6055 (2017). Sundararaman, R. et al. JDFTx: software for joint density-functional theory. SoftwareX 6, 278–284 (2017). This material is based on work performed by the Joint Center for Artificial Photosynthesis, a DOE Energy Innovation Hub, supported through the Office of Science of the U.S. Department of Energy under Award No. DE-SC0004993. R.S., A.S.J., and P.N. acknowledge support from NG NEXT at Northrop Grumman Corporation. Calculations in this work used the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02–05CH11231. A.D. and H.A.A. acknowledge support from the Air Force Office of Scientific Research under grant FA9550-16-1-0019. G.T. acknowledges support from the Swiss National Science Foundation through the Early Postdoc Mobility Fellowship, grant no. P2EZP2_159101. P.N. acknowledges support from the Harvard University Center for the Environment (HUCE). A.S.J. thanks the UK Marshall Commission and the US Goldwater Scholarship for financial support. A.J.W. acknowledges support from the National Science Foundation (NSF) under Award No. 2016217021. Thomas J. Watson Laboratories of Applied Physics, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA, 91125, USA Giulia Tagliabue, Alex J. Welch, Joseph S. DuChene, Ragip Pala, Artur R. Davoyan & Harry A. Atwater Joint Center for Artificial Photosynthesis, California Institute of Technology, 1200 East California Boulevard, Pasadena, CA, 91125, USA Giulia Tagliabue, Alex J. Welch, Joseph S. DuChene & Harry A. Atwater Institute of Astronomy, Cambridge University, Cambridge, CB3 0HA, UK Adam S. Jermyn Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, 110 8th Street, Troy, NY, 12180, USA Ravishankar Sundararaman Resnick Sustainability Institute, California Institute of Technology, Pasadena, CA, 91125, USA Artur R.
Davoyan Kavli Nanoscience Institute, California Institute of Technology, Pasadena, CA, 91125, USA Artur R. Davoyan & Harry A. Atwater John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, 02138, USA Prineha Narang G.T. performed experiments, numerical simulations, and IQE calculations of devices. A.S.J., R.S., and P.N. performed ab initio hot carrier generation and transport calculations. A.J.W., J.S.D., R.P., and A.R.D. contributed to experiments and data analysis. All authors contributed to interpretation of the results. G.T., J.S.D., A.R.D., and H.A.A. wrote the manuscript with contributions from all authors. H.A.A. supervised all aspects of the project. Correspondence to Harry A. Atwater. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Tagliabue, G., Jermyn, A.S., Sundararaman, R. et al. Quantifying the role of surface plasmon excitation and hot carrier transport in plasmonic devices. Nat Commun 9, 3394 (2018). https://doi.org/10.1038/s41467-018-05968-x
Mp00058: Perfect matchings —to permutation⟶ Permutations
Mp00235: Permutations —descent views to invisible inversion bottoms⟶ Permutations
Mp00061: Permutations —to increasing tree⟶ Binary trees
[(1,2)]=>[2,1]=>[2,1]=>[[.,.],.]
[(1,2),(3,4)]=>[2,1,4,3]=>[2,1,4,3]=>[[.,.],[[.,.],.]]
[(1,3),(2,4)]=>[3,4,1,2]=>[4,1,3,2]=>[[.,.],[[.,.],.]]
[(1,4),(2,3)]=>[4,3,2,1]=>[2,3,4,1]=>[[.,[.,[.,.]]],.]
[(1,2),(3,4),(5,6)]=>[2,1,4,3,6,5]=>[2,1,4,3,6,5]=>[[.,.],[[.,.],[[.,.],.]]]
[(1,3),(2,4),(5,6)]=>[3,4,1,2,6,5]=>[4,1,3,2,6,5]=>[[.,.],[[.,.],[[.,.],.]]]
[(1,4),(2,3),(5,6)]=>[4,3,2,1,6,5]=>[2,3,4,1,6,5]=>[[.,[.,[.,.]]],[[.,.],.]]
[(1,5),(2,3),(4,6)]=>[5,3,2,6,1,4]=>[6,3,5,1,2,4]=>[[[.,.],[.,.]],[.,[.,.]]]
[(1,6),(2,3),(4,5)]=>[6,3,2,5,4,1]=>[4,3,5,6,1,2]=>[[[.,.],[.,[.,.]]],[.,.]]
[(1,6),(2,4),(3,5)]=>[6,4,5,2,3,1]=>[3,5,2,6,4,1]=>[[[.,[.,.]],[[.,.],.]],.]
[(1,5),(2,4),(3,6)]=>[5,4,6,2,1,3]=>[2,6,1,5,4,3]=>[[.,[.,.]],[[[.,.],.],.]]
[(1,4),(2,5),(3,6)]=>[4,5,6,1,2,3]=>[6,1,2,4,5,3]=>[[.,.],[.,[[.,[.,.]],.]]]
[(1,3),(2,5),(4,6)]=>[3,5,1,6,2,4]=>[5,6,3,2,1,4]=>[[[[.,[.,.]],.],.],[.,.]]
[(1,2),(3,5),(4,6)]=>[2,1,5,6,3,4]=>[2,1,6,3,5,4]=>[[.,.],[[.,.],[[.,.],.]]]
[(1,2),(3,6),(4,5)]=>[2,1,6,5,4,3]=>[2,1,4,5,6,3]=>[[.,.],[[.,[.,[.,.]]],.]]
[(1,3),(2,6),(4,5)]=>[3,6,1,5,4,2]=>[6,4,3,5,1,2]=>[[[[.,.],.],[.,.]],[.,.]]
[(1,4),(2,6),(3,5)]=>[4,6,5,1,3,2]=>[5,3,1,4,6,2]=>[[[.,.],.],[[.,[.,.]],.]]
[(1,5),(2,6),(3,4)]=>[5,6,4,3,1,2]=>[3,1,4,6,5,2]=>[[.,.],[[.,[[.,.],.]],.]]
[(1,6),(2,5),(3,4)]=>[6,5,4,3,2,1]=>[2,3,4,5,6,1]=>[[.,[.,[.,[.,[.,.]]]]],.]
[(1,2),(3,4),(5,6),(7,8)]=>[2,1,4,3,6,5,8,7]=>[2,1,4,3,6,5,8,7]=>[[.,.],[[.,.],[[.,.],[[.,.],.]]]]
[(1,4),(2,3),(5,8),(6,7)]=>[4,3,2,1,8,7,6,5]=>[2,3,4,1,6,7,8,5]=>[[.,[.,[.,.]]],[[.,[.,[.,.]]],.]]
to permutation Returns the fixed point free involution whose transpositions are the pairs in the perfect matching.
descent views to invisible inversion bottoms Return a permutation whose multiset of invisible inversion bottoms is the multiset of descent views of the given permutation. An invisible inversion of a permutation $\sigma$ is a pair $i < j$ such that $i < \sigma(j) < \sigma(i)$. The element $\sigma(j)$ is then an invisible inversion bottom. A descent view in a permutation $\pi$ is an element $\pi(j)$ such that $\pi(i+1) < \pi(j) < \pi(i)$, and additionally the smallest element in the decreasing run containing $\pi(i)$ is smaller than the smallest element in the decreasing run containing $\pi(j)$. This map is a bijection $\chi:\mathfrak S_n \to \mathfrak S_n$, such that the multiset of descent views in $\pi$ is the multiset of invisible inversion bottoms in $\chi(\pi)$, the set of left-to-right maxima of $\pi$ is the set of maximal elements in the cycles of $\chi(\pi)$, the set of global ascents of $\pi$ is the set of global ascents of $\chi(\pi)$, the set of maximal elements in the decreasing runs of $\pi$ is the set of deficiency positions of $\chi(\pi)$, and the set of minimal elements in the decreasing runs of $\pi$ is the set of deficiency values of $\chi(\pi)$.
to increasing tree Sends a permutation to its associated increasing tree. This tree is recursively obtained by sending the unique permutation of length $0$ to the empty tree, and sending a permutation $\sigma$ of length $n \geq 1$ to a root node with two subtrees $L$ and $R$ by splitting $\sigma$ at the index $\sigma^{-1}(1)$, normalizing both sides again to permutations and sending the permutations on the left and on the right of $\sigma^{-1}(1)$ to the trees $L$ and $R$, respectively.
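A minimal way to sanity-check the table above is to re-implement the first and last maps directly: Mp00058 sends a perfect matching to the fixed-point-free involution that swaps the endpoints of each pair, and Mp00061 recursively splits a permutation at its minimum to build an increasing tree. The Python sketch below is an independent illustration written for this discussion, not FindStat's own code; the middle map (Mp00235) is not implemented, so the tree check takes the middle column of the table as its input.

```python
def matching_to_permutation(matching):
    """Mp00058: send a perfect matching of {1,...,2n} to the fixed-point-free
    involution whose transpositions are the pairs of the matching (one-line notation)."""
    perm = [0] * (2 * len(matching))
    for a, b in matching:
        perm[a - 1] = b
        perm[b - 1] = a
    return perm

def to_increasing_tree(perm):
    """Mp00061: recursively split the sequence at the position of its minimum."""
    if not perm:
        return "."
    i = perm.index(min(perm))
    return [to_increasing_tree(perm[:i]), to_increasing_tree(perm[i + 1:])]

def tree_to_str(t):
    """Render a tree in the bracket notation used in the table ('.' is the empty tree)."""
    return "." if t == "." else "[" + tree_to_str(t[0]) + "," + tree_to_str(t[1]) + "]"

# Check Mp00058 against a few rows (matching => first permutation)
matching_rows = {
    ((1, 2),): [2, 1],
    ((1, 3), (2, 4)): [3, 4, 1, 2],
    ((1, 4), (2, 3)): [4, 3, 2, 1],
    ((1, 5), (2, 3), (4, 6)): [5, 3, 2, 6, 1, 4],
}
for matching, expected in matching_rows.items():
    assert matching_to_permutation(list(matching)) == expected

# Check Mp00061 against a few rows (middle permutation => binary tree)
tree_rows = {
    (2, 1): "[[.,.],.]",
    (2, 1, 4, 3): "[[.,.],[[.,.],.]]",
    (2, 3, 4, 1): "[[.,[.,[.,.]]],.]",
}
for perm, expected in tree_rows.items():
    assert tree_to_str(to_increasing_tree(list(perm))) == expected

print("all checks passed")
```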
Feature Column Archive What is a prime, and who decides? By: cgibbons In: 2021, Algebra and Number Theory, Courtney Gibbons Tagged: partitions, primes Some people view mathematics as a purely platonic realm of ideas independent of the humans who dream about those ideas. If that's true, why can't we agree on the definition of something as universal as a prime number? Courtney R. Gibbons Scene: It's a dark and stormy night at SETI. You're sitting alone, listening to static on the headphones, when all of a sudden you hear something: two distinct pulses in the static. Now three. Now five. Then seven, eleven, thirteen — it's the sequence of prime numbers! A sequence unlikely to be generated by any astrophysical phenomenon (at least, so says Carl Sagan in Contact, the novel from which I've lifted this scene) — in short, proof of alien intelligence via the most fundamental mathematical objects in the universe… bzz bzz, bzz bzz bzz, bzz bzz bzz bzz bzz, … Hi! I'm Courtney, and I'm new to this column. I've been enjoying reading my counterparts' posts, including Joe Malkevitch's column Decomposition and David Austin's column Meet Me Up in Space. I'd like to riff on those columns a bit, both to get to some fun algebra (atoms and ideals!) and to poke at the idea that math is independent of our humanity. Introduction, Take 2 Scene: It's a dark and stormy afternoon in Clinton, NY. I'm sitting alone at my desk with two undergraduate abstract algebra books in front of me, both propped open to their definitions of a prime number… Book A says that an integer $p$ (of absolute value at least 2) is prime provided it has exactly two positive integer factors. Otherwise, Book A says $p$ is composite. Book B says that an integer $p$ (of absolute value at least 2) is prime provided whenever it divides a product of integers, it divides one of the factors (in any possible factorization). Otherwise, Book B says $p$ is composite. Note: Book A is Bob Redfield's Abstract Algebra: A Concrete Introduction (Bob is my predecessor at Hamilton College). Book B is Abstract Algebra: Rings, Groups, and Fields by Marlow Anderson (Colorado College; Marlow was my undergraduate algebra professor) and Todd Feil (Denison University). I reached for the nearest algebra textbook to use as a tie-breaker, which happened to be Dummit and Foote's Abstract Algebra, only to find that the authors hedge their bets by providing Book A's definition and then saying, well, actually, Book B's definition can be used to define prime, actually. Yes, it's a nice exercise to show these definitions are equivalent. I can't help but wonder, though: which one captures what it really is to be prime, and which is merely a consequence of that definition? Some folks take the view that math is a true and beautiful thing and we humans merely discover it. This seems to me to be a way of saying that math is independent of our humanity. Who we are, what communities we belong to — these don't have any effect on Mathematics, Platonic Realm of Pure Ideas. To quantify this position as one might for an intro to proofs class: For each mathematical idea $x$, $x$ has a truth independent of humanity. And yet, two textbooks fundamental to the undergraduate math curriculum are sitting here on my desk with the audacity to disagree about the very definition of arguably the most pure, most platonic, most absolutely mathematical phenomenon you could hope to encounter: prime numbers!
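Since showing the two definitions agree is mentioned as a nice exercise, here is a brute-force check for small integers, added purely as an illustration for this discussion. The code tests Book A's "exactly two positive factors" condition and Book B's "divides a product only if it divides a factor" condition, with the products restricted to a finite range (which is enough to expose every composite in that range); it is numerical evidence, not a proof.

```python
def is_prime_book_A(p):
    """Book A: |p| >= 2 and p has exactly two positive integer factors."""
    if abs(p) < 2:
        return False
    divisors = [d for d in range(1, abs(p) + 1) if p % d == 0]
    return len(divisors) == 2

def is_prime_book_B(p, bound=50):
    """Book B: |p| >= 2 and whenever p divides a product ab, p divides a or b.
    Only checked for a, b in a finite range, so this is evidence, not proof."""
    if abs(p) < 2:
        return False
    for a in range(2, bound):
        for b in range(2, bound):
            if (a * b) % p == 0 and a % p != 0 and b % p != 0:
                return False
    return True

disagreements = [n for n in range(2, 50)
                 if is_prime_book_A(n) != is_prime_book_B(n)]
print("disagreements in 2..49:", disagreements)   # expect: []
print("primes by Book A:", [n for n in range(2, 50) if is_prime_book_A(n)])
```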
This isn't a perfect counterexample to the universally quantified statement above (maybe one of these books is wrong?). But in my informal survey of undergraduate algebra textbooks (the librarians at Hamilton really love me and the havoc I wreak in the stacks!), there's not exactly a consensus on the definition of a prime! On the left, books with definitions that agree with Book A. On the right, books with definitions that agree with Book B. On top, a Polish mood cube showing dejection in Springer yellow. Not pictured, books that fell on the floor while surveying my office shelves. As far as I can tell, the only consensus is that we shouldn't consider $-1$, $0$, or $1$ to be prime numbers. But, uh, why not?! In the case of $0$, it breaks both definitions. You can't divide by zero (footnote: well, you shouldn't divide by zero if you want numbers to be meaningful, which is, of course, a decision that someone made and that we continue to make when we assert "you can't divide by zero"), and zero has infinitely many positive integer factors. But when $\pm 1$ divides a product, it divides one (all!) of the factors. And what's so special about exactly two positive divisors anyway? Why not "at most two" positive divisors? Well, if you're reading this, you probably have had a course in algebra, and so you know (or can be easily persuaded, I hope!) that the integers have a natural (what's natural is a matter of opinion, of course) algebraic analog in a ring of polynomials in a single variable with coefficients from a field $F$. The resemblance is so strong, algebraically, that we call $F[x]$ an integral domain ("a place where things are like integers" is my personal translation). The idea of prime, or "un-break-down-able", comes back in the realm of polynomials, and Book A and Book B provide definitions as follows: Book B says that a nonconstant polynomial $p(x)$ is irreducible provided the only way it factors is into a product in which one of the factors must have degree 0 (and the other necessarily has the same degree as $p(x)$). Otherwise, Book B says $p(x)$ is reducible. Book A says that a nonconstant polynomial $p(x)$ is irreducible provided whenever $p(x)$ divides a product of polynomials in $F[x]$, it divides one of the factors. Otherwise, Book A says $p(x)$ is reducible. Both books agree, however, that a polynomial is reducible if and only if it has a factorization that includes more than one irreducible factor (and thus a polynomial cannot be both reducible and irreducible). Notice here that we have a similar restriction: the zero polynomial is excluded from the reducible/irreducible conversation, just as the integer 0 was excluded from the prime/composite conversation. But what about the other constant polynomials? They satisfy both definitions aside from the seemingly artificial caveat that they're not allowed to be irreducible! Well, folks, it turns out that in the integers and in $F[x]$, if you're hoping to have meaningful theorems (like the Fundamental Theorem of Arithmetic or an analog for polynomials, both of which say that factorization into primes/irreducibles is unique up to a mild condition), you don't want to allow things with multiplicative inverses to be among your un-break-down-ables! We call elements with multiplicative inverses units, and in the integers, $(-1)\cdot(-1) = 1$ and $1\cdot 1 = 1$, so both $-1$ and $1$ are units (they're the only units in the integers).
In the integers, we want $6$ to factor uniquely into $2\cdot 3$, or, perhaps (if we're being generous and allowing negative numbers to be prime, too) into $(-2)\cdot(-3)$. This generosity is pretty mild: $2$ and $-2$ are associates, meaning that they are the same up to multiplication by a unit. One statement of the Fundamental Theorem of Arithmetic is that every integer (of absolute value at least two) is prime or factors uniquely into a product of primes up to the order of the factors and up to associates. That means that the list of prime factors (up to associates) that appear in the factorization of $6$ is an invariant of $6$, and the number of prime factors (allowing for repetition) in any factorization of $6$ is another invariant (and it's well-defined). Let's call it the length of $6$. But if we were to let $1$ or $-1$ be prime? Goodbye, fundamental theorem! We could write $6 = 2\cdot 3$, or $6 = 1\cdot 1\cdot 1 \cdots 1 \cdot 2 \cdot 3$, or $6 = (-1)\cdot (-2) \cdot 3$. We have cursed ourselves with the bounty of infinitely many distinct possible factorizations of $6$ into a product of primes (even accounting for the order of the factors or associates), and we can't even agree on the length of $6$. Or $2$. Or $1$. The skeptical, critical-thinking reader has already been working on workarounds. Take the minimum number of factors as the length. Write down the list of prime factors without their powers. Keep the associates in the list (or throw them out, but at that point, just agree that $1$ and $-1$ shouldn't be prime!). But in the polynomial ring $F[x]$, dear reader, every nonzero constant polynomial is a unit: given $p(x) = a$ for some nonzero $a \in F$, the polynomial $d(x) = a^{(-1)}$ is also in $F[x]$ since $a^{(-1)}$ is in the field $F$, and $p(x)d(x) = 1$, the multiplicative identity in $F[x]$. So, if you allow units to be irreducible in $F[x]$, now even an innocent (and formerly irreducible) polynomial like $x$ has infinitely many factorizations into things like $(a)(1/a)(b)(1/b)\cdots x$. So much for those workarounds! So, since we like our Fundamental Theorems to be neat, tidy, and useful, we agree to exclude units from our definitions of prime and composite (or irreducible and reducible, or indecomposable and decomposable, or…). More Consequences (or Precursors) Lately I've been working on problems related to semigroups, by which I mean nonempty sets equipped with an associative binary operation — and I also insist that my semigroups be commutative and have a unit element. In the study of factorization in semigroups, the Fundamental Theorem of Arithmetic leads to the idea of counting the distinct factors an element can have in any factorization into atoms (the semigroup equivalent of irreducible/prime elements; these are elements $p$ that factor only into products involving units and associates of $p$). One of my favorite (multiplicative) semigroups is $\mathbb{Z}[\sqrt{-5}] = \{a + b \sqrt{-5} \, : \, a,b \in \mathbb{Z}\}$, favored because the element $6$ factors distinctly into two different products of irreducibles! In this semigroup, $6 = 2\cdot 3$ and $6 = (1+\sqrt{-5})(1-\sqrt{-5})$. It's a nice exercise to show that $1\pm \sqrt{-5}$ are not associates of $2$ or $3$, yielding two distinct factorizations into atoms! While we aren't lucky enough to have unique factorization, at least we have that the number of irreducible factors in any factorization of $6$ is always two.
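The usual way to verify that $2$, $3$, and $1 \pm \sqrt{-5}$ really are atoms is the norm $N(a + b\sqrt{-5}) = a^2 + 5b^2$, which is multiplicative: any factorization of an element of norm 4, 6, or 9 into non-units would require an element of norm 2 or 3, and a quick search shows there are none. The sketch below, an illustration added to this discussion rather than anything from the column itself, checks the two factorizations of 6 and the absence of norm-2 and norm-3 elements.

```python
from itertools import product

def norm(a, b):
    """Norm of a + b*sqrt(-5) in Z[sqrt(-5)]: N = a^2 + 5*b^2 (multiplicative)."""
    return a * a + 5 * b * b

def multiply(x, y):
    """(a + b w)(c + d w) with w = sqrt(-5), using w^2 = -5."""
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

# The two factorizations of 6 from the column
print(multiply((2, 0), (3, 0)))     # 2 * 3        -> (6, 0)
print(multiply((1, 1), (1, -1)))    # (1+w)(1-w)   -> (6, 0)

# Units have norm 1, and 2, 3, 1+w, 1-w have norms 4, 9, 6, 6.  Since there are
# no elements of norm 2 or 3, none of these can factor into a product of non-units.
small = list(product(range(-4, 5), repeat=2))
print("norm-2 elements:", [z for z in small if norm(*z) == 2])   # []
print("norm-3 elements:", [z for z in small if norm(*z) == 3])   # []
```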
That is, excluding units from our list of atoms leads to an invariant of $6$ in the semigroup $\mathbb{Z}[\sqrt{-5}]$. Anyway, without the context of more general situations like this semigroup (and I don't know, is $\mathbb{Z}[\sqrt{-5}]$ one of those platonically true things, or were Gauss et al. just really imaginative weirdos?), would we feel so strongly that $1$ is not a prime integer? Still More Consequences (or precursors) Reminding ourselves yet again that the integers form a ring under addition and multiplication, we might be interested in the ideals generated by prime numbers. (What's an ideal? It's a nonempty subset of the ring closed under addition, additive inverses, and scalar multiplication from the ring.) We might even call those ideals prime ideals, and then generalize to other rings! The thing is, if we do that, we end up with this definition: (Book A and B agree here:) An ideal $P$ is prime provided $xy \in P$ implies $x$ or $y$ belongs to $P$. But in the case of the integers — a principal ideal domain! — that means that a product $ab$ belongs to the principal ideal generated by the prime $p$ precisely when $p$ divides one of the factors. From the perspective of rings, every (nonzero) ring has two trivial ideals: the ring $R$ itself (and if $R$ has unity, then that ideal is generated by $1$, or any other unit in $R$) and the zero ideal (generated by $0$). If we want the study of prime ideals to be the study of interesting ideals, then we want to exclude units from our list of potential primes. And once we do, we recover nice results like an ideal $P$ is prime in a commutative ring with unity if and only if $R/P$ is an integral domain. I still have two books propped open on my desk, and after thinking about semigroups and ideals, I'm no closer to answering the question "But what is a prime, really?" than I was at the start of this column! All I have is some pretty good evidence that we, as mathematicians, might find it useful to exclude units from the prime-or-composite dichotomy (I haven't consulted with the mathematicians on other planets, though). To me, that evidence is a reminder that we are constantly updating our mathematics framework in reference to what we learn as we do more math. We look back at these ideas that seemed so solid when we started — something fundamentally indivisible in some way — and realize that we're making it up as we go along. (And ignoring a lot of what other humans consider math, too, as we insist on our axioms and law of the excluded middle and the rest of the apparatus of "modern mathematics" while we're making it up…) And the math that gets done, the math that allows us to update our framework… Well, that depends on what is trendy/fundable/publishable, who is trendy/fundable/publishable, and who is making all of those decisions. Perhaps, on planet Blarglesnort, math looks very different. Anderson, Marlow; Feil, Todd. A first course in abstract algebra. Rings, groups, and fields. Third edition. ISBN: 9781482245523. Dummit, David S.; Foote, Richard M. Abstract algebra. Third edition. ISBN: 0471433349. Redfield, Robert. Abstract algebra. A concrete introduction. First edition. ISBN: 9780201437218. Geroldinger, Alfred; Halter-Koch, Franz. Non-unique factorizations. Algebraic, combinatorial and analytic theory. ISBN: 9781584885764. 
Decomposition
By: Ursula Whitcher In: 2021, Algebra and Number Theory, Discrete Math and Combinatorics, Joseph Malkevitch
Joe Malkevitch
York College (CUNY)
One way to get insights into something one is trying to understand better is to break the thing down into its component parts, something simpler. Physicists and chemists found this approach very productive—to understand water or salt it was realized that common table salt was sodium chloride, a compound made of two elements, sodium and chlorine, and that water was created from hydrogen and oxygen. Eventually, many elements (not the Elements of Euclid!) were discovered. The patterns noticed in these building block elements led to the theoretical construct called the periodic table, which showed that various elements seemed to be related to each other. The table suggested that there might be elements which existed but had not been noticed; the "holes" in the table were filled when these elements were discovered, sometimes because missing entries were sought out. The table also suggested "trans-uranium" elements, which did not seem to exist in the physical world but could be created, and were created, in the laboratory. These new elements were in part created because the periodic table suggested approaches as to how to manufacture them. The work done related to understanding the structure of the periodic table suggested and led to the idea that elements were also made up of even smaller pieces. This progression of insight led to the idea of atoms, and the realization that atoms too might have structure led to the idea of subatomic particles. But some of these "fundamental" particles could be decomposed into smaller "parts." We now have a zoo of quarks and other "pieces" to help us understand the complexities of the matter we see in the world around us. Crystals of gallium, an element whose existence was predicted using the periodic table. Photo by Wikipedia user Foobar, CC BY-SA 3.0. Prime patterns Mathematics too has profited from the idea that sometimes things of interest might have a structure which allowed them to be decomposed into simpler parts. A good example is the number 111111111. It is an interesting pattern already, because all of its digits are 1's when written in base 10. We could compare 111111111 with the number that it represents when it is interpreted in base 2 (binary)—here it represents 511. But it might be interesting to study relations between numbers with all 1's as digits and the numbers they represent in other bases, not only base 2! Mathematics grows when someone, perhaps a computer, identifies a pattern which can be shown to hold in general, rather than for the specific example that inspired the investigation. A number of the form 1111….1 is called a repunit. Can we find interesting patterns involving repunits? One approach to decomposing a number (here strings of digits are to be interpreted as being written in base 10) is to see if a number can be written as the product of two other numbers different from 1 and itself. For example, 17 can only be written as the product of the two numbers 17 and 1. On the other hand, 16 can be written as something simpler, as $2 \times 8$, but there are "simpler" ways to write 16, as $4 \times 4$, and since 4 can be decomposed as $2 \times 2$ we realize that 16 can be written as $2 \times 2 \times 2 \times 2$.
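As a small aside (not from the original column), the base-conversion and factoring claims above are easy to experiment with; the helper names below are just illustrative, and the trial-division factorizer is only meant for small numbers.

```python
# Read a string of nine 1's in various bases and factor the results by trial division.

def repunit_value(num_ones, base):
    """Value of the numeral 11...1 (num_ones digits) interpreted in the given base."""
    return sum(base ** k for k in range(num_ones))

def factor(n):
    """Prime factorization of n by trial division (fine for small n)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

for base in (2, 3, 10):
    value = repunit_value(9, base)
    print(base, value, factor(value))
# Base 2 gives 511, the value mentioned above (it factors as 7 * 73);
# base 10 gives 111111111 = 3 * 3 * 37 * 333667.
```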
Seeing this pattern, mathematicians are trained to ask questions, such as whether numbers that are products of copies of a single number which cannot itself be broken down (like $16 = 2 \times 2 \times 2 \times 2$) have any special interest. But we are getting ahead of ourselves here. What are the "atoms" of the multiplicative aspect of integers? These are the numbers called the primes, 2, 3, 5, 7, 11, … Notice that 11 is also a repunit. This takes us back to the idea that there might be many numbers of the form 11111…….111111 that are prime! Are there infinitely many primes all of whose digits in their base 10 representation are one? Answer!! No one knows. But it has been conjectured that there might be infinitely many repunit primes. Note that numbers like 111, 111111, 111111111, … where the number of digits is a multiple of 3, can't be prime (their digit sums, and hence the numbers themselves, are divisible by 3). Note that 11, read in base 2, is 3, which is also prime. Similarly, the 19-digit repunit 1111111111111111111 is prime, and read in base 2 it represents the number 524287, which is also prime. If a repunit is prime, must the number it represents when treated as a base 2 numeral also be prime? By looking for "parts" that were building blocks for the integers, mathematics has opened a rich array of questions and ideas, many of which have spawned major new mathematical ideas, both theoretical and applied. Having found the notion of prime number as a building block of the positive number system, there are natural and "unnatural" questions to ask: 1. Are there infinitely many different primes? 2. Is there a "simple" function (formula) which generates all of the primes, or if not all primes, only primes? While the fact that there are infinitely many primes was already known by the time of Euclid, the irregularity of the primes continues to be a source of investigations to this day. Thus, the early discovered pattern that there seem to be many pairs of primes differing by two (e.g. 11 and 13, 41 and 43, 137 and 139) led to the "guess" that perhaps there are infinitely many numbers of the form $p$ and $p+2$ that are both prime (known as twin primes); this conjecture is still unsolved today. While more and more powerful computers have made it possible to find larger and larger twin prime pairs, no one has found a proof that there are infinitely many such pairs. There were attempts to approach this issue via a more general question: are there infinitely many pairs of primes that differ by at most some fixed bound? Little progress was made on this problem until 2013, when a mathematician whose name was not widely known in the community showed that there is some finite bound such that infinitely many pairs of primes differ by no more than that bound. This work by Yitang Zhang set off a concerted search to improve his methods and alter them in ways that give better bounds for the size of this gap. While Zhang's breakthrough has been improved greatly, the current state of affairs is still far from proving that the twin-prime conjecture is true. Photo of Yitang Zhang. Courtesy of Wikipedia. Mathematical ideas are important as a playground in which to discover more mathematical ideas, thus enriching our understanding of mathematics as an academic subject and sometimes making connections between mathematics and other academic subjects. Today there are strong ties between mathematics and computer science, an academic subject that did not even exist when I was a public school student. Mathematics can be applied in ways that not long ago could not even be imagined, much less carried out.
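The questions just raised about repunit primes are easy to explore by machine. Here is a sketch that does so, assuming the SymPy library is available for primality testing; running it for lengths up to 30 or so already provides useful evidence about the last question above.

```python
# For each repunit length n, test whether the n-digit repunit is prime when read in
# base 10, and whether the same string of 1's read in base 2 (i.e., 2**n - 1) is prime.

from sympy import isprime

for n in range(2, 31):
    base10_repunit = (10 ** n - 1) // 9   # n ones, read as a base-10 numeral
    base2_value = 2 ** n - 1              # the same numeral read in base 2
    if isprime(base10_repunit):
        print(n, "digit repunit is prime; 2^n - 1 prime?", isprime(base2_value))
```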
Who would have thought that the primes would help make possible communication that prevents snooping by others as well as protect the security of digital transactions? From ancient times, codes and ciphers were used to make it possible to communicate, often in military situations, so that should a communication fall into enemy hands it would not assist them. (Codes involve the replacement of words or strings of words with some replacement symbol(s), while ciphers refer to replacing each letter in an alphabet with some other letter in order to disguise the meaning of the original text.) Human ingenuity has been remarkable in developing clever systems that allow text to be encrypted rapidly and decrypted by the intended receiver in a reasonable amount of time, while, at the very least, slowing down an enemy who came into possession of the message. But the development of calculators and digital computers made it harder to protect encrypted messages, because many systems could be attacked by a combination of brute force (trying all possible cases) and ideas about how the design of the code worked. Statistical methods, based on the frequencies of letters and/or words used in particular languages, were also developed and employed to break codes. You can find more about the interactions between mathematics, ciphers, and internet security in the April 2006 Feature Column! Earlier we looked at "decomposing" numbers into their prime parts in a multiplicative setting. Remarkably, a problem about decomposing numbers under addition has stymied mathematics for many years, despite the simplicity of stating the problem. The problem is named for Christian Goldbach (1690-1764). Letter from Goldbach to Euler asking about what is now known as Goldbach's Conjecture. Image courtesy of Wikipedia. Goldbach's Conjecture (1742): Every even integer $n$ greater than 2 can be written as the sum of two primes. For example, 10 = 3 + 7 (or also 5 + 5), 20 = 3 + 17, 30 = 11 + 19. We allow the two primes in the decomposition to be the same or different. While computers have churned out larger and larger even numbers for which the conjecture is true, the problem is still open after hundreds of years. What importance should one attach to answering a particular mathematical question? This is not an easy issue to address. Some mathematical questions seem to be "roadblocks" to getting insights into what seem to be important questions in one area of mathematics, and in some cases answering a mathematical question seems to open doors on many mathematical issues. Another measure of importance might be the aesthetic properties of a particular mathematical result. The aesthetic appeal may be that something seems surprising or unexpected, or that a result seems to have "beauty"—a trait that, whether one is talking about beautiful music, fabrics, poems, etc., seems to differ greatly from one person to the next. It is hard to devise an objective yardstick for beauty. Another scale of importance is the "value" of a mathematical result to areas of knowledge outside of mathematics. Some results in mathematics have proved to be insightful in many academic disciplines like physics, chemistry, and biology, but other mathematics seems to be relevant only to mathematics itself.
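To see how such computer checks of Goldbach's conjecture go, here is a small sketch (not from the column) that verifies the conjecture for the even numbers up to a modest bound and prints one decomposition for each; serious verification efforts use far more sophisticated methods and reach far larger bounds.

```python
# Check Goldbach's conjecture for even numbers up to LIMIT, printing one decomposition each.

def primes_up_to(limit):
    """Sieve of Eratosthenes: returns a list where sieve[k] is True iff k is prime."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    return sieve

LIMIT = 100
is_prime = primes_up_to(LIMIT)
for n in range(4, LIMIT + 1, 2):
    p = next(p for p in range(2, n) if is_prime[p] and is_prime[n - p])
    print(n, "=", p, "+", n - p)   # e.g., 4 = 2 + 2, 10 = 3 + 7, 30 = 7 + 23
```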
What seems remarkable is that over and over again mathematics that seemed only to have value within mathematics itself or to be only of "theoretical" importance has found use outside of mathematics. Earlier I mentioned some applications of mathematical ideas to constructing ciphers to hide information. There are also codes designed to correct errors in binary strings and to compress binary strings. Cell phones and streaming video use these kinds of ideas: it would not be possible to have the technologies we now have without the mathematical ideas behind error correction and data compression. The word decompose has some connotations in common with the word partition. Each of these words suggests breaking up something into pieces. Often common parlance guides the use of the technical vocabulary that we use in mathematics, but in mathematics one tries to be very careful and precise about what meaning one wants a word to have. Sometimes in popularizing mathematics this attempt to be precise is the enemy of the public's understanding of the mathematics involved. Sometimes the words used to define a concept are mathematically precise but obscure the big picture of what the collection of ideas/concepts being defined is getting at. Here I try to use "mathematical terminology" to show the bigger picture of the ideas involved. Given a positive integer $n$, we can write $n$ as a sum of positive integers in different ways. For example, $3 = 3$, $3 = 2+1$, and $3 = 1 + 1 + 1$. In counting the number of decompositions possible, I will not take order of the summands into account—thus, $1 + 2$ and $2 + 1$ will be considered the same decomposition. Each of these decompositions is considered to be a partition of 3. In listing the partitions of a particular number $n$, it is common to use a variant of set theory notation where the entries in set brackets below can be repeated. Sometimes the word multiset is used to generalize the idea of set, so that we can repeat the same element in a set. Thus we can write the partitions of three as $\{3\}$, $\{2,1\}$, $\{1,1,1\}$. A very natural question is to count how many different partitions there are of a given positive integer $n$. You can verify that there are 5 partitions of the number 4, 7 partitions of the number 5, and 11 partitions of the number 6. Although the number of partitions of $n$ has been computed for very large values of $n$, there is no known simple closed formula which computes the number of partitions of $n$ for a given positive integer $n$. Sometimes the definition of partition insists that the parts making up the partition be listed in a particular order. It is usual to require the numbers in the partition not to increase as they are written out. I will use this notational convention here: The partitions of 4 are: $\{4\}$, $\{3,1\}$, $\{2, 2\}$, $\{2, 1, 1\}$, $\{1,1,1,1\}$. Sometimes in denoting partitions with this convention exponents are used to indicate runs of parts: $4$; $3,1$; $2^2$; $2, 1^2$; $1^4$. The notation for representing partitions varies a lot from one place to another. In some places for the partition of 4 consisting of $2 + 1 + 1$ one sees $\{2,1,1\}$, $2+1+1$, $211$ or $2 1^2$ and other variants as well! It may be worth noting before continuing on that we have looked at partitions of $n$ in terms of the sum of smaller positive integers, but there is another variant that leads in a very different direction. This involves the partition of the set $\{1,2,3,\dots,n\}$ rather than the partition of the number $n$.
In this framework a partition of a set $S$ consists of a collection of non-empty, pairwise disjoint subsets of the set $S$ whose union is $S$. (Remember that the union of two sets $U$ and $V$ lumps together the elements of $U$ and $V$ and throws away the duplicates.) Example: Partition the set $\{1,2,3\}$: $$\{1,2,3\}, \{1,2\} \cup \{3\}, \{1, 3\} \cup \{2\}, \{2,3\} \cup \{1\}, \{1\} \cup \{2\} \cup \{3\}$$ While there are 3 partitions of the number 3, there are 5 partitions of the set $\{1,2,3\}$. The numbers of partitions of $\{1,2,3,\dots,n\}$ are counted by the Bell numbers, named for Eric Temple Bell (1883-1960). While the "standard" name for these numbers now honors Bell, other scholars prior to Bell also studied what today are known as the Bell numbers, including the Indian mathematician Srinivasa Ramanujan (1887-1920). A sketch of Eric Temple Bell. Courtesy of Wikipedia. Partitions have proved to be a particularly intriguing playground for studying patterns related to numbers and have been used to frame new questions related to other parts of mathematics. When considering a partition of a particular number $n$, one can think about different properties of the entries in one of the partitions: How many parts are there? How many of the parts are odd? How many of the parts are even? How many distinct parts are there? For example, the partition $\{3, 2, 1, 1\}$ has 4 parts; the number of odd parts is 3, the number of even parts is 1, and the number of distinct parts is 3. Closely related to partitions is using diagrams to represent partitions. There are various versions of these diagrams, some with dots for the entries in the partition and others with cells where the cell counts in the rows are the key to the numbers making up the partition. Thus for the partition $3+2+1$ of 6 one could display this partition in a visual way:

X X X
X X
X

There are various conventions about how to draw such diagrams. One might use X's as above but traditionally dots are used or square cells that abut one another. These are known as Ferrers's (for Norman Macleod Ferrers, 1829-1903) diagrams or sometimes tableaux, or Young's Tableaux. The latter name honors Alfred Young (1873-1940), a British mathematician who introduced the notion which bears his name in 1900. Norman Ferrers. Image courtesy of Wikipedia. The term Young's Tableau is also used for diagrams such as the one below where numbers chosen in various ways are placed inside the cells of the diagram. A representation of the partition of 10 with parts 5, 4, 1. A representation of the partition of 11 ($\{5,3,2,1\}$) using rows of dots. While these diagrams show partitions of 10 and 11 by reading across the rows, one also sees that these diagrams display partitions of 10 and 11, namely, 3,2,2,2,1 and 4,3,2,1,1 respectively, by reading in the vertical direction rather than the horizontal direction. Thus, each Ferrers's diagram gives rise to two partitions, which are called conjugate partitions. Some diagrams will read the same in both the horizontal and vertical directions; such partitions are called self-conjugate. Experiment to see if you can convince yourself that the number of self-conjugate partitions of $n$ is the same as the number of partitions of $n$ with odd parts that are all different (the short computational sketch after the next figure offers one way to gather evidence)! The next figure collects Ferrers's diagrams for the partitions of small integers. Ferrers's diagrams of partitions of the integers starting with 1, lower right, and increasing to partitions of 7. Courtesy of Wikipedia.
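For readers who want numerical evidence for the exercise about self-conjugate partitions, here is a short sketch (not part of the column; the function names are just illustrative) that generates partitions, computes conjugates, and compares the two counts for small $n$.

```python
# Count self-conjugate partitions of n and partitions of n into distinct odd parts.

def partitions(n, largest=None):
    """Yield the partitions of n as non-increasing tuples."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def conjugate(p):
    """Conjugate partition: the column counts of the Ferrers diagram of p."""
    return tuple(sum(1 for part in p if part > k) for k in range(p[0])) if p else ()

for n in range(1, 13):
    parts = list(partitions(n))
    self_conjugate = sum(1 for p in parts if p == conjugate(p))
    distinct_odd = sum(1 for p in parts
                       if len(set(p)) == len(p) and all(part % 2 == 1 for part in p))
    print(n, self_conjugate, distinct_odd)   # the two counts agree for every n tried
```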
Often to get insights into mathematical phenomena one needs data. Here, for example, complementing the previous figure, is a table of the ways to write the number $n$ as the sum of $k$ parts. For example, 8 can be written as a sum of two parts in 4 ways. These are the partitions of 8 which have two parts: $7+1$, $6+2$, $5+3$, and $4+4$.

Partitions of $n$ into exactly $k$ parts (columns $k$ = 1 to 8):
n = 1: 1
n = 2: 1 1
n = 3: 1 1 1
n = 4: 1 2 1 1
n = 5: 1 2 2 1 1
n = 6: 1 3 3 2 1 1
n = 7: 1 3 4 3 2 1 1
n = 8: 1 4 5 5 3 2 1 1
Fill in the next row!!
Table: Partition of the number $n$ into $k$ parts

While many people have contributed to the development of the theory of partitions, the prolific Leonhard Euler (1707-1783) was one of the first. Leonhard Euler. Image courtesy of Wikipedia. Euler was one of the most profound contributors to mathematics over a wide range of domains, including number theory, to which ideas related to partitions in part belong. Euler showed a surprising result related to what today are called figurate numbers. In particular he discovered a result related to pentagonal numbers. Euler was fond of using power series (generalized polynomials with infinitely many terms), which in the area of mathematics dealing with counting problems, combinatorics, are related to generating functions. If one draws a square array of dots, one sees $1, 4, 9, 16, \dots$ dots in the pattern that one draws. What happens when one draws triangular, pentagonal, or hexagonal arrays of dots? In the next two figures, we see a sample of the many ways one can visualize the pentagonal numbers: $1, 5, 12, 22, \dots$ (a) (b) Two ways of coding the pentagonal numbers. Courtesy of Wikipedia. The pentagonal numbers for side lengths of the pentagon from 2 to 6. Courtesy of Wikipedia. Godfrey Harold Hardy (1877-1947), Ramanujan, and, in more modern times, George Andrews and Richard Stanley have been important contributors to a deeper understanding of the patterns implicit in partitions and ways to prove that the patterns that are observed are universally correct. Photo of G. H. Hardy. Photo courtesy of Wikipedia. Srinivasa Ramanujan. Image courtesy of Wikipedia. Photo of George Andrews. Courtesy of Wikipedia. Photo of Richard Stanley. Courtesy of Wikipedia. What is worth noting here is that the methodology of mathematical investigations is both local and global. When one hits upon the idea of what can be learned by "decomposing" something one understands, in the hope of getting a deeper understanding, it also has implications in other environments where one uses the broader notion (decomposition). So understanding primes as building blocks encourages one to investigate primes locally in the narrow arena of integers but also makes one think about other kinds of decompositions that might apply to integers. We are interested in not only decompositions of integers from a multiplicative point of view but also decompositions of integers from an additive point of view. Here in a narrow sense one sees problems like the Goldbach Conjecture, but in a broader sense it relates to the much larger playground of the partitions of integers. When one develops a new approach to looking at a situation (e.g. decomposing something into parts) mathematicians invariably try to "export" the ideas discovered in a narrow setting to something more global, including areas of mathematics that are far from where the original results were obtained. So if decomposition is useful in number theory, why not try to understand decompositions in geometry as well? Thus, there is a whole field of decompositions dealing with plane polygons, where the decompositions are usually called dissections.
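The surprising Euler result alluded to above is presumably his pentagonal number theorem, which turns the pentagonal numbers into a recurrence for the partition counts $p(n)$. Under that assumption, here is a sketch (not from the column) that uses the generalized pentagonal numbers $k(3k-1)/2$, for $k = 1, -1, 2, -2, \dots$, to compute $p(n)$; its output matches the row sums of the table above (for example, the row for $n = 8$ sums to 22).

```python
# Compute partition counts p(0), ..., p(max_n) via Euler's pentagonal number recurrence.

def partition_counts(max_n):
    p = [1] + [0] * max_n                       # p[0] = 1
    for n in range(1, max_n + 1):
        total, k = 0, 1
        while True:
            for g in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):
                if g > n:
                    break
                sign = -1 if k % 2 == 0 else 1  # signs alternate + + - - + + ...
                total += sign * p[n - g]
            if k * (3 * k - 1) // 2 > n:        # no larger pentagonal numbers contribute
                break
            k += 1
        p[n] = total
    return p

print(partition_counts(10))   # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```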
As an example of a pattern which has been discovered relatively recently and which illustrates that there are intriguing mathematical ideas still to be discovered and explored, consider this table, which lists the partitions of 4 along with the number of distinct elements and the number of 1's in each:

Partition   Distinct elements   Number of 1's
4           1                   0
3+1         2                   1
2+2         1                   0
2+1+1       2                   2
1+1+1+1     1                   4
Total       7                   7

See anything interesting—some pattern? Not all of the entries in the second and third columns are odd, and not all are even; there is no obvious pattern entry by entry. However, Richard Stanley noted that the sums of the second and third columns are equal! Both add to 7. And this is true for all values of $n$. How might one prove such a result? One approach would be to find a formula (function involving $n$) for the number of distinct elements in the partitions of $n$ and also find a formula for the number of 1's in the partitions of $n$. If these two formulas are the same for each value of $n$, then it follows that we have a proof of the general situation that is illustrated for the example $n = 4$ in the table above. However, it seems unlikely that there is a way to write down closed form formulas for either of these two different quantities. But there is a clever approach to dealing with the observation above that is also related to the discovery of Georg Cantor (1845-1918) that there are sets with different sizes of infinity. Consider the two sets, $\mathbb{Z}^+$ the set of all positive integers and the set $\mathbb{E}$ of all of the even positive integers. $\mathbb{Z}^+ = \{1,2,3,4,5, \dots\}$ and $\mathbb{E} = \{2,4,6,8,10,\dots\}$. Both of these are infinite sets. Now consider the table:

1 paired with 2
2 paired with 4
3 paired with 6
…
$n$ paired with $2n$

Note that each of the entries in $\mathbb{Z}^+$ will have an entry on the left in this "count" and each even number, the numbers in $\mathbb{E}$, will have an entry on the right in this "count." This shows that there is a one-to-one and onto way to pair these two sets, even though $\mathbb{E}$ is a proper subset of $\mathbb{Z}^+$ in the sense that every element of $\mathbb{E}$ appears in $\mathbb{Z}^+$ and there are elements in $\mathbb{Z}^+$ that don't appear in $\mathbb{E}$. There seems to be a sense in which $\mathbb{E}$ and $\mathbb{Z}^+$ have the same "size." This strange property of being able to pair elements of a set with a proper subset of itself can only happen for an infinite collection of things. Cantor showed that in this sense of size, often referred to as the cardinality of a set, some pairs of sets which seemed very different in size had the same cardinality (size). Thus, Cantor showed that the set of positive integers has the same cardinality as the set of positive rational numbers (numbers of the form $a/b$ where $a$ and $b$ are positive integers with no common factor). Remarkably, he was also able to show that the set of positive integers has a different cardinality from the set of real numbers. To this day there are questions dating back to Cantor's attempt to understand the different sizes that infinite sets can have that are unresolved. What many researchers are doing, for old and new results about partitions, is showing that when two collections are defined differently but turn out to have the same counts, the equality of the counts can be established by constructing a bijection between the two different collections. When such a one-to-one and onto correspondence (function) can be shown for any value of a positive integer $n$, then the two collections of things must have the same size.
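Stanley's observation is also easy to check by brute force for small $n$. The sketch below (not a bijective proof, just a computation; it reuses the same partition generator as in the earlier sketch) compares the total number of distinct parts with the total number of 1's over all partitions of $n$.

```python
# Compare: (a) the number of distinct parts summed over all partitions of n, and
#          (b) the number of 1's appearing in all partitions of n.

def partitions(n, largest=None):
    """Yield the partitions of n as non-increasing tuples."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

for n in range(1, 15):
    parts = list(partitions(n))
    total_distinct = sum(len(set(p)) for p in parts)
    total_ones = sum(p.count(1) for p in parts)
    print(n, total_distinct, total_ones)   # e.g., n = 4 gives 7 and 7, as in the table
```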
Such bijective proofs often show more clearly the connection between seemingly unrelated things than does showing that the two different concepts can be counted with the same formula. These bijective proofs often help generate new concepts and conjectures. Try investigating ways that decompositions might give one new insights into ideas you find intriguing. Those who can access JSTOR can find some of the papers mentioned above there. For those with access, the American Mathematical Society's MathSciNet can be used to get additional bibliographic information and reviews of some of these materials. Some of the items above can be found via the ACM Digital Library, which also provides bibliographic services.
Andrews, G. E. The Theory of Partitions. Cambridge University Press, 1998.
Andrews, G. E. and Eriksson, K. Integer Partitions. Cambridge University Press, 2004.
Atallah, Mikhail J., and Marina Blanton, eds. Algorithms and Theory of Computation Handbook, Volume 2: Special Topics and Techniques. CRC Press, 2009.
Bóna, M., ed. Handbook of Enumerative Combinatorics (Vol. 87). CRC Press, 2015.
Fulton, William. Young Tableaux: With Applications to Representation Theory and Geometry (No. 35). Cambridge University Press, 1997.
Graham, Ronald L. Handbook of Combinatorics. Elsevier, 1995.
Gupta, H. Partitions – a survey. Journal of Research of the National Bureau of Standards B: Mathematical Sciences 74 (1970): 1–29.
Lovász, László, József Pelikán, and Katalin Vesztergombi. Discrete Mathematics: Elementary and Beyond. Springer Science & Business Media, 2003.
Martin, George E. Counting: The Art of Enumerative Combinatorics. Springer Science & Business Media, 2001.
Matousek, Jiri. Lectures on Discrete Geometry (Vol. 212). Springer Science & Business Media, 2013.
Menezes, Alfred J., Paul C. Van Oorschot, and Scott A. Vanstone. Handbook of Applied Cryptography. CRC Press, 2018.
Pak, Igor. "Partition bijections, a survey." The Ramanujan Journal 12.1 (2006): 5–75.
Rosen, Kenneth H., ed. Handbook of Discrete and Combinatorial Mathematics. CRC Press, 2017.
Rosen, Kenneth H., and Kamala Krithivasan. Discrete Mathematics and Its Applications: With Combinatorics and Graph Theory. Tata McGraw-Hill Education, 2012.
Sjöstrand, Jonas. Enumerative Combinatorics Related to Partition Shapes. Dissertation, KTH, 2007.
Stanley, Richard P. Ordered Structures and Partitions (Vol. 119). American Mathematical Society, 1972.
Stanley, Richard P. "What is enumerative combinatorics?" Enumerative Combinatorics. Springer, Boston, MA, 1986. 1–63.
Stanley, Richard P. Enumerative Combinatorics, Volume 1, second edition. Cambridge Studies in Advanced Mathematics. Cambridge University Press, 2011.
Stanley, Richard P. (with an appendix by S. Fomin). Enumerative Combinatorics, Volume 2. Cambridge Studies in Advanced Mathematics 62. Cambridge University Press, 1999.
Stanley, Richard P. Catalan Numbers. Cambridge University Press, 2015.
Toth, Csaba D., Joseph O'Rourke, and Jacob E. Goodman, eds. Handbook of Discrete and Computational Geometry. CRC Press, 2017.
NUS ICPC Selection Contest (Vietnam National Open 2020) Dividing Kingdom The Kingdom of Byteland has $n$ cities, numbered from $1$ to $n$. There are exactly $n-1$ roads in the kingdom, each connects a pair of cities. Using these roads, it is possible to go from any city to any other city. The king of Byteland wants to divide the kingdom into two halves and pass down each half to one of his two sons. This would be done by removing exactly one road, splitting the kingdom into two connected components. This turns out to be a non-trivial task! To avoid fighting between his sons, the king needs to divide the kingdom fairly. For each half, he defines its diameter as the longest simple path connecting two cities in that half. The king wants the difference between the diameters of the two halves to be minimum. Note: A simple path from city $s$ to city $t$ is an ordered sequence of cities $v_0 \rightarrow v_1 \rightarrow v_2 \rightarrow \ldots \rightarrow v_k$, where $v_0 = s, v_k = t$, and all $v_i$ are unique. For each valid index $i$, $v_i$ and $v_{i+1}$ are connected directly by some road. The length of a path is the sum of the lengths of all roads connecting $v_i$ and $v_{i+1}$. Please help the king! The input contains multiple test cases, each test case is presented as below: The first line contains a positive integer $n$ $(2 \le n \le 3 \cdot 10^5)$ — the number of cities. The sum of $n$ in all test cases does not exceed $10^6$. In the next $n-1$ lines, the $i$-th line $(i = 1 \ldots n-1)$ contains two integers $p_i$ and $l_i$ $(1 \le p_i \le i, 1 \le l_i \le 10^9)$, indicating that two cities $p_i$ and $i+1$ are connected by a road of length $l_i$. The input ends with a line containing a single $0$ which is not a test case. For each test case, print a single line containing a single integer — the minimum difference between the two diameters. Explanation of the first sample The figure below demonstrates the first test case. One way to obtain an optimal solution is to remove the road between city $2$ and city $7$ (marked by dashed line). By removing this road, the kingdom is split into two halves, whose cities are marked in orange and blue colors. The longest simple paths are marked in bolder colors. Problem ID: dividingkingdom Source: The 2020 ICPC Vietnam National Contest
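A brute-force sketch may help make the task concrete (this is only an illustration, not a solution meeting the stated limits: it runs in roughly quadratic time, while the limits call for something close to linear, and it takes the tree as a ready-made edge list rather than parsing the multi-test-case input).

```python
# Remove each road in turn, compute the diameter of each of the two components with
# two depth-first sweeps, and keep the smallest difference of diameters.

from collections import defaultdict

def component_diameter(adj, start, banned_edge):
    """Diameter of the component containing `start`, ignoring the road in banned_edge."""
    def farthest(source):
        best, dist = (0, source), {source: 0}
        stack = [source]
        while stack:
            u = stack.pop()
            for v, w in adj[u]:
                if v in dist or {u, v} == banned_edge:
                    continue
                dist[v] = dist[u] + w
                best = max(best, (dist[v], v))
                stack.append(v)
        return best
    _, far_node = farthest(start)
    length, _ = farthest(far_node)
    return length

def min_diameter_difference(n, roads):
    """roads: list of (u, v, length) triples describing the tree on cities 1..n."""
    adj = defaultdict(list)
    for u, v, w in roads:
        adj[u].append((v, w))
        adj[v].append((u, w))
    return min(abs(component_diameter(adj, u, {u, v}) - component_diameter(adj, v, {u, v}))
               for u, v, _ in roads)

# Tiny example: a path 1-2-3 with road lengths 4 and 7; removing the length-7 road
# leaves diameters 4 and 0, so the answer is 4.
print(min_diameter_difference(3, [(1, 2, 4), (2, 3, 7)]))
```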
Vol. 3, No. 7, 2009 ISSN: 1944-7833 (e-only) The half-twist for $U_q(\mathfrak{g})$ representations Noah Snyder and Peter Tingley Vol. 3 (2009), No. 7, 809–834 DOI: 10.2140/ant.2009.3.809 We introduce the notion of a half-ribbon Hopf algebra, which is a Hopf algebra $\mathcal{H}$ along with a distinguished element $t \in \mathcal{H}$ such that $(\mathcal{H}, R, C)$ is a ribbon Hopf algebra, where $R = (t^{-1} \otimes t^{-1})\Delta(t)$ and $C = t^{-2}$. The element $t$ is closely related to the topological "half-twist", which twists a ribbon by 180 degrees. We construct a functor from a topological category of ribbons with half-twists to the category of representations of any half-ribbon Hopf algebra. We show that $U_q(\mathfrak{g})$ is a (topological) half-ribbon Hopf algebra, but that $t^{-2}$ is not the standard ribbon element. For $U_q(\mathfrak{sl}_2)$, we show that there is no half-ribbon element $t$ such that $t^{-2}$ is the standard ribbon element. We then discuss how ribbon elements can be modified, and some consequences of these modifications. Keywords: quantum group, Hopf algebra, ribbon category Mathematical Subject Classification 2000 Secondary: 57T05, 57M05 Received: 6 October 2008 Revised: 11 September 2009 Noah Snyder Rm 626, MC 4403 http://math.columbia.edu/~nsnyder/ Peter Tingley 77 Massachusetts Ave http://www-math.mit.edu/~ptingley/
Reading: Individuals and non-individuals in cognition and semantics: The mass/count distinction and q... Special Collection: The interpretation of the mass-count distinction across languages and populations Individuals and non-individuals in cognition and semantics: The mass/count distinction and quantity representation Darko Odic , University of British Columbia, CA Paul Pietroski, University of Maryland College Park, US Tim Hunter, University of California Los Angeles, US Justin Halberda, Johns Hopkins University, US Jeffrey Lidz Language is a sub-component of human cognition. One important, though often unattained goal for both cognitive scientists and linguists is to explicate how the meanings of words and sentences relate to the more general, non-linguistic, cognitive systems that are used to evaluate whether sentences are true or false. In the present paper, we explore one such relationship: an interface between the linguistic structures referring to individuals and non-individuals (specifically, count-nouns like 'cows' and mass-nouns like 'beef') and the non-linguistic cognitive systems that quantify and compare number and area. While humans may be flexible in how they use language across contexts, in two experiments using standard psychophysical testing we find that participants evaluate a count-noun sentence via numerical representations and evaluate a corresponding mass-noun sentence via non-numerical representations; consistent with a principled interface between language and cognition for evaluating these terms. This was the case even when the visual display was held constant across conditions and only the noun type was varied, further suggesting an important difference in how area and number, as well as count and mass nouns, are represented. These findings speak to issues concerning the semantics-cognition interface, the mass-count distinction, and the psychophysics of quantity representation. Keywords: approximate number system, quantity representation, semantics-cognition interface, count/mass nouns, quantification. How to Cite: Odic, D., Pietroski, P., Hunter, T., Halberda, J., & Lidz, J. (2018). Individuals and non-individuals in cognition and semantics: The mass/count distinction and quantity representation. Glossa: A Journal of General Linguistics, 3(1), 61. DOI: http://doi.org/10.5334/gjgl.409 Accepted on 12 Feb 2018 Submitted on 20 Apr 2017 A representational distinction between individuals (e.g., objects) and non-individuals (e.g., substances or extents) has played an important role in theories of cognitive representations (Scholl 2001; Feigenson 2007; Carey 2009) and in semantic theories focused on the formal structures that underlie linguistic meaning (Higginbotham 1994; Chierchia 1998b; Bale & Barner 2009; Rothstein 2010). For example, human infants and children have been shown to quantify and reason differently for individual objects than for piles of sand (Huntley-Fenner, Carey & Solimando 2002; Huntley-Fenner et al. 2002), suggesting that they represent objects as something more than mere aggregates of matter. Similarly, many human languages syntactically distinguish count nouns (e.g., cow, chair) from mass nouns (e.g., beef, wood), suggesting a difference in semantic representations. At first blush, the count/mass distinction might seem to be a mere syntactic coding of the object/substance distinction. But this analogy is only apparent. 
Intuitively, mass nouns like jewelry or furniture are used to refer to (collections of) individuals, as opposed to substances (Chierchia 1998a; Bale & Barner 2009). Count nouns like line or twig are used to talk about homogenous entities (in that any arbitrary subpart of a twig is a twig, just as an arbitrary subpart of water is water), further blurring a semantic distinction between count and mass (Mittwoch 1988; Krifka 1992). Indeed, the very things described with a plural count noun (e.g., shoes, coins, ropes) can often be described with a mass noun (e.g., footwear, change, rope), again indicating that the grammatical count-mass distinction does not align neatly with the psychological object-substance distinction. Finally, languages differ with regard to whether a lexical item is primarily a count or mass noun (e.g., hair is a mass noun in English, but a count noun in French; Chierchia 1998a). Despite the lack of a clear reduction of count and mass nouns into representations of objects and substances, investigating the link between grammatical form and cognitive representation may nonetheless prove informative for two questions about human cognition. First, do basic intuitions of magnitude (e.g., how tall is a flagpole, how many people are in the room) reflect a single generalized magnitude system, or multiple such systems, each tuned to a specific dimension of experience (Walsh 2003; Cantlon, Platt & Brannon 2009; Lourenco & Longo 2010)? Second, is there a single kind of semantic representation from which we construct the meanings of both count and mass nouns (Chierchia 1998a; b), or are the meanings of count and mass nouns drawn from two similar but distinct representational domains (Link 1983; Landman 1991; Bale & Barner 2009)? These two questions have more than a superficial similarity. In each case they ask whether apparently disparate representations are somehow unified, despite intuitive distinctions. They also ask about the unity and diversity of quantificational systems in linguistic and nonlinguistic cognition. Surprisingly, psychological investigations of magnitude and linguistic investigations of quantification have largely proceeded without significant cross-fertilization. In the current paper, we will argue that the parallels between linguistic and nonlinguistic quantification can be leveraged to inform theorizing in both domains through the interface between linguistic expressions and the cognitive systems used to verify their meanings. Moreover, in order to understand how children acquire the count/mass distinction in language, it is important to first have a clear understanding of how this distinction is represented in linguistic semantics and how it relates to cognitive magnitude representations. The latter is especially important, as these cognitive representations also undergo some development that spans the time when relevant linguistic representations are acquired (Halberda & Feigenson 2008; Odic et al. 2013). The simple act of assessing whether "More of the dots are blue than yellow" in Figure 1 requires engaging a broad array of cognitive systems. To understand this sentence, the reader must identify the meanings of the individual words and, by using basic rules of syntactic organization and semantic composition, determine how meanings of words combine in this sentence to form a larger unified meaning. 
But to verify the sentence – i.e., to evaluate it, as understood, for truth or falsity – one must invoke psychological capacities like visual attention, numerical magnitudes, and ordinal comparison, each of which behaves according to its own rules that are distinct from those of natural language. Language use thus depends on the existence of a tractable interface between our linguistic-semantic representations and our psychology.1 And, of course, learning the meanings of words like more will depend, at least to some extent, on these same interfaces. Example of stimuli used in Experiment 1. The difficulty of characterizing this interface has been a major stumbling block in integrating theories of quantification in linguistics and cognitive psychology, and more generally in rigorously integrating linguistic semantics with the rest of cognition (Pietroski et al. 2009; Lidz et al. 2011). As noted above, an important open question in semantics is whether count and mass terms rely on formally distinct semantic representations (Link 1983; Landman 1991; Bale & Barner 2009) or a common underlying formal structure (Chierchia 1998a; b); and an important open question in psychology is whether conceptions of number and area rely on distinct cognitive systems (Castelli, Glaser & Butterworth 2006; Cohen Kadosh, Lammertyn & Izard 2008) or a single unified magnitude system (Walsh 2003; Bueti & Walsh 2009; Lourenco & Longo 2010). By exploring the interface between lexical semantics and magnitude representations, we will argue that there are multiple distinct cognitive systems for quantification and that the count nouns and mass nouns link up to distinct semantic representations. In the first experiment, we investigate what behavioral signatures, if any, may differentiate number (object) processing from area (substance/extent) processing with stimuli that are either quite clearly about number (sets of dots) or about area (a single continuous mass). Then, in the second experiment, we turn to the question of whether the semantic distinction of count and mass nouns interfaces with the cognitive representations of number and area, even given identical displays. The combined results indicate that there are two distinct cognitive systems at play in quantification – one for quantifying number, the other for quantifying area – and that the linguistic count/mass distinction connects with these cognitive systems in way that certain classes of semantic theories would not predict. It is now well accepted among psychologists that humans can represent number in at least two ways. The first method, and the one probably most familiar to us all, is by counting and representing number exactly (Gelman & Gallistel 1978; Wynn 1992; Feigenson, Dehaene & Spelke 2004; Carey 2009). However, although such a representational system is useful, it only emerges after a lot of learning (Gelman & Gallistel 1978; Wynn 1992; Carey 2009), and it may also require a spoken/signed language (Gordon 2004; Frank et al. 2008; Carey 2009). An alternative number representational system – the Approximate Number System (ANS) – appears to be innate in both humans and other animals (Dehaene 1997; Feigenson et al. 2004; Izard et al. 2009) and is used by infants (Feigenson et al. 2004) and adults who lack number words (Pica et al. 
2004) to make numerical discriminations and compute the outcomes of addition and subtraction events.2 The ANS is what gives us an intuitive feel of how many things are in a set, such as, for example, in guessing how many marbles are in a jar. In the present experiments concerning number, we focus on this gut intuitive sense of numerosity generated by the ANS. The ANS is not capable of representing number exactly. Instead, it approximates number, and represents it as a continuous Gaussian activation (for details, see Results) of several numerical values on a mental number line (Dehaene 1997; Nieder & Miller 2004). Thus, one never has knowledge of exactly how many items are in a scene – merely a rough range. Additionally, the comparison of two such activations is successful insofar as the two representations do not overlap too much – the greater the degree of overlap between two approximate number representations, the more difficult it is to discriminate between them (Green & Swets 1966). This property of the ANS results in its compliance to Weber's law – the smaller the ratio of two numbers, the worse discrimination is between them, regardless of the total number of items. Thus, for relatively high ratios, like 2.0 (10 blue: 5 yellow dots), discrimination is easy, while for relatively low ratios, like 1.2 (12 blue: 10 yellow dots), discrimination is hard and error-prone. Compliance of a numerical judgment performance to Weber's law is the primary behavioral signature of ANS use. Individuals vary in how well their ANS can discriminate numbers. An individual's discrimination abilities are measured by the internal precision of the representation – or the Weber fraction (w; Green & Swets 1966). The Weber fraction roughly corresponds to the most difficult ratio that an observer can discriminate with 75% accuracy and indicates the amount of "noise" in the internal representations (the Gaussian distributions) that make up the dimension. A person with a higher Weber fraction for a given dimension will have noisier internal representations and have a harder time discriminating between two representations within the dimension (e.g., some people will easily discriminate 10 from 8 dots, while others struggle with this discrimination). These individual differences are well behaved and can be estimated for each person by precise mathematical models. The ANS supports a sense of numerical magnitude. But humans and non-human animals can represent other magnitudes as well. These other magnitude representations also rely on a noisy, Gaussian representational format (Feigenson 2007; Cantlon et al. 2009). Decades of work on various cognitive continua, including length, brightness, pitch, time, and area have suggested that humans represent various "approximate" dimensions, which all follow Weber's law (e.g., 10 seconds versus 5 seconds is easier to discriminate than 12 seconds versus 10 seconds). This similarity in discrimination behavior (i.e., discrimination that obeys Weber's law) has led several researchers to propose that a single, domain general magnitude system may underlie all our judgments about quantity (Walsh 2003; Bueti & Walsh 2009; Lourenco & Longo 2010). Under one version of this view, our quantity representations do not differentiate between objects and extents – both object-related quantities, like number, and extent-related quantities, like area, are encoded on the exact same mental quantity line by identical sets of noisy Gaussian representations. 
In Experiment 1, we put this idea to the test, and look for differences in the Weber fraction that describes the underlying noise signature for area and number discrimination within a subject. While discrimination in many dimensions (e.g., number, area, brightness, loudness) obeys Weber's law, these dimensions may not all have precisely the same Weber fraction within an individual (Feigenson 2007). It is possible that, for example, area information may be represented with higher precision (i.e., a lower Weber fraction) than number information, and this difference would be consistent with different internal processes being engaged when representing and verifying the values of these dimensions. If there were only a single domain general magnitude system for both area and number, then, at a first pass, the Weber fraction for representing area should be the same as that for representing number (though perhaps small systematic differences might arise from low-level perceptual differences in processing each type of information). Even then, however, because a common representation is giving rise to all magnitude discriminations, the Weber fractions should at least be strongly, if not perfectly, correlated, as they are measuring the exact same parameter. On the other hand, if number discrimination performance results in different, uncorrelated, Weber fractions (e.g., area is better than number), then two distinct magnitude systems may be involved, each tuned to just one of these two dimensions (e.g., an Approximate Number System, and a separate Approximate Area System). A distinction between area and number processing can then be further explored in the interface with language.3 In the present experiments we assess the Weber fraction for number and for area and compare these. Previous research on area Weber fractions has been very mixed, with some work suggesting that area representations show poor Weber fractions (Morgan 2005) and others suggesting that it shows relatively good Weber fractions (Nachmias 2008), and correlations between area and number tasks have never been examined. Likewise, some work has suggested that, given a choice of encoding a set of objects by either number or area, number tends to be preferred by children (Gathercole 1985; Barner & Snedeker 2006; Cantlon, Safford & Brannon 2010) and infants (Cordes & Brannon 2008). Due to the conflicting literature, it is unclear that we have evidence for or against a similarity in Weber fractions between number and area representations. To investigate the precision of area discrimination, in Experiment 1 we presented adult subjects with non-geometric figures and asked them to discriminate which of two colors was larger in area (i.e., "Is more of this blob blue or yellow"; Figure 1a). These images were also created so that the competing dimensions of number, line length, and aspect-ratio could not be used (Morgan 2005; Nachmias 2008). This was contrasted with a task where the same images were converted into displays of dots where total area was varied, and subjects had to answer a count-noun question which, given the stimuli presented, clearly required number discrimination (i.e., "Are more of these dots blue or yellow"; Figure 1b). Performance across ratios was modeled with a psychometric equation to determine the Weber fraction that best describes approximate number and approximate area discrimination. 
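Before turning to the methods, a small simulation (a sketch, not the authors' code; the Weber fraction value is just illustrative) may help make the ratio dependence described above concrete: each quantity is represented by a Gaussian whose standard deviation is $w$ times its value, and a trial is counted correct when the sample for the larger quantity exceeds the sample for the smaller one.

```python
# Simulate ratio-dependent discrimination accuracy from a single Weber fraction w.

import random

def simulated_accuracy(n_large, n_small, w, trials=20000):
    correct = 0
    for _ in range(trials):
        sample_large = random.gauss(n_large, w * n_large)
        sample_small = random.gauss(n_small, w * n_small)
        correct += sample_large > sample_small
    return correct / trials

w = 0.2   # an illustrative Weber fraction
for n_large, n_small in [(20, 10), (15, 10), (12, 10), (11, 10)]:
    print(f"ratio {n_large / n_small:.2f}: {simulated_accuracy(n_large, n_small, w):.2f}")
# Accuracy falls toward chance (0.5) as the ratio approaches 1.0, for any fixed w.
```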
Should the two Weber fractions differ, we would have some initial evidence for a different behavioral patterns in area and number processing that may be indicative of a difference in the representational systems used. Additionally, should the two judgments use different representations, we should expect no correlations in Weber fractions across the two tasks. We first test for different Weber fractions for area and number processing using unambiguous displays in Experiment 1 (e.g., Figure 1), and then turn to displays that are ambiguous between area and number in Experiment 2. 2 Experiment 1 2.1.1 Subjects Participants were 16 college-age adults, naïve to the purpose of the experiment, who either volunteered or were compensated $10 for their time. 2.1.2 Materials & apparatus Each participant was individually tested in a dimly lit room. The experiment was presented on a Macintosh Pro with a 22" LCD screen. Participants were seated about 42 cm away from the monitor with their heads unrestrained. All programs were custom made in a Java environment. During the Area task, participants were presented with a blob image that was divided into a yellow and blue part (Figure 1a). The generation of blob images was done in three steps. In the first step, we generated 26 unique outlines; care was taken to ensure that the outlines were curvilinear and natural, resembling a mass of stuff on a page. In the second stage, the outlines were divided into a blue and yellow area. For each outline, we generated 132 splits of blue and yellow. The lines that cut the blob into two areas were also made to look natural and curvilinear. This method gave us a broad range of areas and perimeters for the blue and yellow sections, while retaining a non-geometric look (cf. Tegthsoonian 1965). In the final stage, the blue and yellow areas were measured using a custom-made pixel-counting program. Ratios were determined by dividing the larger area (in number of pixels) by the smaller area. Overall, we generated over 3,400 blob images, but only administered those with ratios varying from the easy 2.2 (approx. 11:5 pixels) to the very difficult 1.01 (approx. 101: 100 pixels); the displays with ratios of over 2.2 were deemed too easy during pilot testing, and were not administered. For the Number task, participants were presented with an image of blue and yellow dots on the screen (Figure 1b). To create the dot images, we took our blob images and used a custom-made Python program to extract circles of various radii from the blob areas. During the extraction, one of three area parameters were used to determine the size relationship between the blue and yellow dots: dots were either correlated in area and number (with the winning color being larger by the same ratio in both area and number), anti-correlated (with the same ratio in both but giving opposite answers; e.g., blue wins in number but yellow wins in area), or were area controlled (area was matched in two dot sets). The dot-images had four fixed ratios: 2.0 (12:6 or 14:7 dots), 1.67 (10:6 or 15:9 dots), 1.5 (9:6 or 12:8 dots) or 1.2 (12:10 or 18:15 dots). 2.1.3 Procedure Participants were seated in front of the monitor and were instructed on their task. Each participant did both the Area and Number task, with order counterbalanced across subjects. 
During the Area task, participants were asked to indicate whether "More of the blob is blue or yellow", and to press the F key for "More of the blob is yellow", and the J key for "More of the blob is blue" (note that blob, like rock and string, is flexible between a mass and count reading, but that the context of the sentence and the image clearly implies that it is used as a mass noun).4 In the Number task, subjects were asked to indicate whether "More of the dots are blue or yellow", and to press the F key for "More of the dots are yellow", and the J key for "More of the dots are blue". Each trial began with a number in the center of the gray screen that indicated the remaining number of trials. Participants had to press the spacebar to begin the trial, and were told that, if tired, they could take as long as they needed before starting each individual trial. After the spacebar was pressed, the stimuli appeared for 500 milliseconds (ms), and were backward masked by an image that had several dozen small blue and yellow blobs (the same masking image was used in both tasks). No image appeared more than once. In both tasks, there were 10 practice trials at the beginning, identical to the test trials, which were excluded from analysis. In the Area task, there were 300 trials; in the Number task, where there were three types of area-controlled displays, there were 600 trials, which were evenly distributed across ratios and area-controls. After the experiment was over, the participants were debriefed. On average, the experiment took 30 minutes to complete. Our analysis was done in two parts. In the first part, we examine whether the two tasks demonstrated compliance to Weber's law. In the second part, we model which Weber fraction best describes the performance on each task, and, subsequently, compare the performance on the two tasks. In order to determine whether the Area task showed an effect of ratio and to maximize statistical power, we rounded and binned the continuously distributed ratios into six evenly spaced ratio bins. Because these bins were not identical to the Number task, we ran two separate Analyses of Variance (ANOVAs) rather than an single, omnibus one. Assuming that performance will comply with Weber's law, the two tasks can be directly compared by comparing the resulting Weber fractions, a content-independent metric of the noise in the underlying representations; this analysis is presented further below. In the case of the Area task, we ran a 2 (Order: Area-First, Number-First) × 6 (Ratio: 1.1, 1.3, 1.5, 1.7, 1.9, 2.1) Mixed-Measures ANOVA that showed a significant effect of Ratio (F(5,70) = 114.12; p < 0.01) and accounted for 89% of the variance; there was no effect of Order (F(1,14) < 1) or an Order × Ratio interaction (F(5,70) < 1). For the Number task, we ran a 2 (Order: Area-First, Number-First) × 4 (Ratio: 1.2, 1.5, 1.67, 2.0) Mixed-Measures ANOVA that also showed an effect of Ratio (F(3,42) = 160.313; p < 0.01) and accounted for 92% of the variance; once again, there was no effect of Order (F(1,14) = 2.764; p > 0.10) nor an interaction (F(3,42) = 2.313; p > 0.10). Therefore, in both cases, there was a ratio-dependent effect that is consistent with Weber's law (Figure 2). Data from Experiment 1. Next, the two conditions were modeled to determine the Weber fractions. The model used to describe the performance is one that is used widely in the psychophysics literature (Green & Swets 1966; Pica et al. 
2004), where $n_1$ and $n_2$ refer to the quantity of each set (e.g., 20 and 10 dots), $w$ refers to the Weber fraction, and erfc refers to the complementary error function of a Normal/Gaussian curve: $$\frac{1}{2}\,\mathrm{erfc}\left(\frac{n_1 - n_2}{\sqrt{2}\,w\,\sqrt{n_1^2 + n_2^2}}\right) \times 100 \qquad \text{(M1)}$$ Extensive details on this model are presented in the Appendix to Lidz et al. (2011), and are only described briefly here. The model assumes that the underlying representations are distributed along a continuum of Gaussian/Normally distributed random variables. Because each representation (e.g., one triggered in response to 20 dots) is distributed across the continuum, two overlapping values, be they two numbers or sizes of blobs, will naturally representationally overlap, creating confusion. In other words, as two quantities become increasingly similar (i.e., as their ratio gets closer to 1.0), their Gaussian representations should overlap more and participants should have a more difficult time determining which is larger, resulting in decreasing accuracy at the task as a function of ratio. If both number and area processing comply with Weber's law, this same basic model can be used to fit subjects' performance, with the resulting Weber fraction indicating the amount of noise in the underlying Gaussian representations of number and area. This model has only a single free parameter – the Weber fraction (w) – which indicates the amount of noise in the underlying Gaussian representations (i.e., the standard deviation of the Gaussian number or area representations). Larger w values indicate poorer discrimination of the system across all ratios. The best fitting w value was determined for each subject using the least-squares method, minimizing the squared error between the model and each observed data point. The modeled group data are presented in Figure 2. Both the Number and Area tasks were well-described by the Gaussian psychophysical model (both $r^2 > 0.97$ for the group fits, Figure 2), confirming that Weber's law applied and returning an estimate of the Weber fraction for each task. Next, we examined whether a single Approximate System underlies performance in both tasks (as would be revealed by a non-significant difference in Weber fraction between tasks) or whether there are two distinct Gaussian systems, the Approximate Number System (ANS) and an "Approximate Area System" (AAS), which would be revealed by a significant difference in Weber fraction between these two tasks. Performance from each subject for each task was fit independently and the w values were compared. The average w value for the Area task was 0.18 (comparable with the estimate for area perception in Morgan 2005; Standard Error/SE = 0.02), while the average w for the Number task was 0.27 (comparable with the estimate for number perception in Izard & Dehaene 2008; SE = 0.03). These values were run through a paired-sample t-test which showed a significantly lower Area w (t(15) = –3.534; p < 0.01). We also examined, participant-by-participant, which w value was lower – for all but one participant the w value for the Area task was lower than the Number task.
This result suggests that two different approximate systems – an Approximate Area System (AAS) and an Approximate Number System (ANS) – were engaged on the two tasks and that the AAS has less noise than the ANS across participants. One concern, however, is that this difference in Weber fraction may be due to a single magnitude system being used to make discriminations on different types of perceptual evidence. One way to address this is by considering the individual differences in Weber fraction across the two tasks. If each person relied on a single system (e.g., the ANS) in the two tasks then we would expect individual performance on the two tasks to correlate. However, the Weber fractions on the two tasks did not correlate (p > 0.25) suggesting independent sources of representational noise and, thus, that two different approximate systems were being used on the two tasks. Another test of the independence of number and area processing is to determine if the area-control manipulation within the Number task had any effect on performance. Number trials were split into the three area-control conditions used to create the displays (i.e., area-correlated, area-anti-correlated, area-controlled), and a Weber fraction (w) for each subject for each condition was determined via least squares. The average w for area-controlled trials was 0.28 (SE = 0.03), for area anti-correlated was 0.32 (SE = 0.06) and for area-correlated was 0.24 (SE = 0.01). A 3-level (Condition: Area-Correlated, Area-Anti-Correlated, Area-Controlled) Repeated Measures ANOVA found no effect of condition (F(2,30) = 2.134; p > 0.13) suggesting that area correlations did not impact participants' number decisions. This remained true even when the area-controlled condition was removed and we compared only the two most extreme trial types (i.e., area-correlated and area-anti-correlated), suggesting that area content was not used in estimating number (Hurewitz, Gelman & Schnitzer 2006; Barth 2008). The results of Experiment 1 suggest that area and number discrimination engaged distinct magnitude systems or, at the very least, distinct representations of the display. 2.3 Discussion In our first experiment, we found that number discrimination and area discrimination are each consistent with Weber's law. We also found a significant within-subject difference in the Weber fraction estimated from these two tasks suggesting that number and area estimation rely on distinct cognitive systems (i.e., an Approximate Number System – ANS – and an Approximate Area System – AAS). The possibility of distinct cognitive systems for number and area was further supported by the lack of a correlation between the Weber fractions estimated for these two tasks and the absence of an effect of area-control on the number estimation trials. A potential concern may be that participant's Weber fractions differed because of some inherent difference in the display – one display may have been easier to quantify and compare than the other. Note that if this was the case, we would expect a correlation between the number and area tasks, which we did not find; likewise, we found no influence of area on number. Ideally, however, we should expect to find a distinction between number and area processing in identical stimuli. We turn to this question, as well as the mapping of count and mass nouns to number and area processing, in the second experiment. 
Given that we have some preliminary evidence about the distinction between number and area processing in cognition, we can now turn to the problem of how the linguistic count/mass distinction interfaces with general cognition and makes contact with the cognitive distinction between objects and substances/extents.5 We return to the discussion of multiple quantity systems in the general discussion. 3 The mass/count distinction The mass/count distinction has been studied extensively, and we will not attempt a review here (for representative discussions see Link 1983; Higginbotham 1994; Chierchia 1998a; b; Bale & Barner 2009; Rothstein 2010). There is disagreement about how to characterize the distinction in a theoretically illuminating way,6 but, for present purposes, two standard syntactic diagnostics will suffice, at least for languages such as English: Only count nouns can be pluralized (cow/cows, beef/*beefs); relatedly, only count nouns can combine with numerical determiners (three cows, *three beef/beefs). Certain determiners only combine with count nouns (many dots/*many mud/muds); others only combine with mass nouns (much mud/*much dot/dots); and some, of particular interest here, can combine with either kind of noun (more dots/more mud). Any diagnostics for the count/mass distinction must come with the caveat that there is considerable flexibility in how nouns can be used (e.g., Frisson & Fraizer 2005). Even paradigmatic count nouns like dinosaur can have odd-sounding mass noun counterparts, as in "After the meteor struck, there was dinosaur all over the place", and paradigmatic mass nouns like mud can have odd-sounding count noun counterparts, as in "At the spa, we tried three different muds." And many nouns seem perfectly comfortable in either mode, as in "The blue rocks and guitar strings were found on some blue rock and old string". This leaves it open whether the homophony is due to multiple lexical entries that are semantically related (Frisson & Fraizer 2005) or multiple derivations from a common lexical root. Several theories have been proposed to account for the linguistic data concerning the mass/count distinction. Barner and Snedeker (2006) and Bale and Barner (2009) argue that count nouns always refer to individuals, and, given comparative count-noun sentences, are verified via number. Mass nouns, under this account, usually refer to non-individuals, and are not necessarily verified via number. One prediction of this theory is that the differences in the truth conditions between mass/unmarked and count/pluralized noun comparative sentences like "More of the blob is blue" and "More of the blobs are blue" will give rise to different verification procedures. Specifically, a quantification system that represents number should be used for count-noun sentences and a different quantification system that represents area, volume, or brightness should be used for mass-noun sentences. Another prominent mass/count theory has been put forth by Chierchia (1998a; b) and argues that both count nouns and mass nouns refer to individuals or units, but that these units are vague for mass nouns and thus need to be identified during verification (cf., Rothstein 2010). 
Under at least some readings of this view, the verification procedures for mass noun and count noun sentences invoke one and the same non-linguistic quantification system, namely the one that discriminates number: the speaker needs to identify the relevant unit of the visual image referred to by the mass noun, and must then count up the units. Thus, given two buckets of paint, one may judge which one has more paint by deciding that the unit of paint is a small 1×1 inch square, and then counting up the squares in each bucket (we return to discussing Chierchia's 2015 account in more detail in the General Discussion). Chierchia (1998a; b) and Rothstein (2010) posit only a weak distinction between count and mass nouns in the semantics while, Link (1983), Landman (1991) and Bale & Barner (2009) suggest there is a stronger formal distinction. Evidence from number and area cognition may be relevant to this debate, but the distinction between number and area processing demonstrated in Experiment 1 does not necessitate that there is a strong count/mass distinction in the semantics of the sort that Bale & Barner (2009) propose. For example, perhaps participants in the Area task who heard the mass noun blob were biased towards verifying the sentence via a numerical quantification systems, but, given that there was only a single blob, no numerical information was available, and participants opted for the next best thing – area discrimination (Rothstein 2010). A stronger test of the existence of two independent magnitude systems in cognition (i.e., ANS and AAS) and of the interface between these systems and a prominent count/mass distinction in the semantics would be to use identical displays and only vary the question asked (i.e., by varying the minimal syntactic difference between count and mass nouns). If count and mass nouns differ in their reference, we should find significant differences in the Weber fraction estimates for these two conditions. In particular, if count syntax maps to number processing and mass syntax to area processing, we should find Weber fractions comparable to those found in the Experiment 1. Participants were 12 adults, naïve to the purpose of the experiment, who either volunteered or were compensated $10 for their time. None had participated in the first experiment. 4.1.2 Materials and apparatus Every factor from the first experiment was retained except for the following. Participants were presented with a display containing several blue and yellow colored blobs (Figure 3). The blobs were randomly selected from a set of 18 curvilinear outlines and randomly placed on the screen. We used five ratios for both Mass and Count comparisons: 2.0, 1.5, 1.2, 1.14, and 1.12. On half of the trials, the total summed area of the colored blobs was correlated with the number (e.g., blue wins by both more dots and more area), and on the other half of the trials, the total summed area of the colored blobs was anti-correlated with number (e.g., blue wins by more dots but yellow wins by more area); area-controlled trials were removed as these trials would not generate an answer for the area question. Importantly, as in Experiment 1 the ratio by number and by area was identical on each trial, but inverted in the anti-correlated condition (e.g., if the number of dots was in a ratio of 2:1 with more yellow, then the number of pixels was in a ratio of 2:1 with more blue). This ensured that subjects saw stochastically identical displays for the count noun and mass noun conditions. 
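To make this counterbalancing concrete, the following sketch (our own illustration, not the stimulus-generation code used in the experiment; the counts and base item size are arbitrary) shows one way to choose item counts and item sizes so that the number ratio and the total-area ratio are equal in magnitude but either aligned (correlated) or inverted (anti-correlated) across the two colors.

# Sketch of the display logic: blue and yellow differ by the same ratio r in both
# number and total pixel area, with the area advantage either aligned with or
# opposed to the number advantage. Values and dict layout are illustrative only.
def make_trial(r=2.0, n_loser=10, base_item_area=100.0, anti_correlated=False):
    n_blue = round(r * n_loser)          # blue wins the number comparison at ratio r
    n_yellow = n_loser
    if anti_correlated:
        # Yellow must win the area comparison at the same ratio r:
        # total yellow area = r * total blue area.
        blue_item_area = base_item_area
        yellow_item_area = r * n_blue * blue_item_area / n_yellow
    else:
        # Correlated: blue also wins the area comparison, again at ratio r.
        yellow_item_area = base_item_area
        blue_item_area = r * n_yellow * yellow_item_area / n_blue
    return {
        "n_blue": n_blue,
        "n_yellow": n_yellow,
        "total_area_blue": n_blue * blue_item_area,
        "total_area_yellow": n_yellow * yellow_item_area,
    }

print(make_trial(r=2.0, anti_correlated=False))
# {'n_blue': 20, 'n_yellow': 10, 'total_area_blue': 2000.0, 'total_area_yellow': 1000.0}
print(make_trial(r=2.0, anti_correlated=True))
# {'n_blue': 20, 'n_yellow': 10, 'total_area_blue': 2000.0, 'total_area_yellow': 4000.0}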
Each participant did both the Count and the Mass task, with order counterbalanced across subjects. In order to further minimize any differences between the two tasks and have a consistent sentence structure, we used the noun blob in both instances, varying only the count/mass syntax through the use of either the singular/unmarked form of the noun, or the plural form. Thus, during the Mass task, participants were asked: "Is more of the blob blue or yellow", and to press the F key for "More of the blob is yellow", and the J key for "More of the blob is blue". In the Count task, they were asked: "Are more of the blobs blue or yellow", and to press the F key for "More of the blobs are yellow", and the J key for "More of the blobs are blue". Thus, all participants saw identical displays and pushed identical buttons and only the is/are and blob/s changed in the initial instructions. Our question was whether this small change in syntax would result in subjects recruiting different verification procedures and, thus, distinct non-linguistic magnitude systems as revealed by different Weber fractions. We predicted that success at the Mass task (i.e., "Is more of the blob blue or yellow") would engage the Approximate Area System (AAS) while success at the Count task (i.e., "Are more of the blobs blue or yellow") would engage the Approximate Number System (ANS) and this difference would be reflected in a difference in Weber fraction even when the displays were stochastically identical. Our display time remained at 500 ms, but stimuli were not masked (pilot testing reveled no effect of mask and so it was removed as some subjects found it distracting). There were 300 trials per condition. After the experiment was over, the participants were debriefed. On average, the experiment took 30 minutes to complete. We followed the same analyses we performed in Experiment 1. We first ran a 2 (Order: Mass-First, Count-First) × 2 (Task: Mass, Count) × 5 (Ratio: 2.0, 1.5, 1.2, 1.14, and 1.12) Mixed Measures ANOVA on percent correct at each ratio. There were no main effects or interactions of any factor with Order (all p > 0.20), suggesting that pragmatic effects of contrast between the two tasks are not responsible for our results. There was a significant effect of Ratio (F(1,40) = 140.66; p < 0.01) and of Task (F(1,10) = 10.93; p < 0.01). Consistent with Experiment 1, participants did significantly better in the Mass condition (Mean = 0.80; SE = 0.01) than the Count condition (Mean = 0.74; SE = 0.02). Next, we turned to modeling. The modeled group performance is presented in Figure 4. Just like in Experiment 1, the Weber fraction was fitted for each participant for each condition. The average w for the Mass condition was 0.20 (SE = 0.05; group fit r2 = 0.99) and for the Count condition was 0.29 (SE = 0.13; group fit r2 = 0.98); reflecting better discrimination in the Mass condition. This difference was significant as measured by a planned t-test (t(11) = –2.428; p < 0.05). Both these values closely mirror the values found in Experiment 1 for the Area and Number tasks. We also examined, participant-by-participant, which w value was lower – for all but one participant, the w value for the Mass task was lower than the Count task. Thus, it may be possible that the same magnitude system was used (e.g., the ANS), but that blob-area units are somehow easier to verify than blob-number units. If this were the case, the Weber fractions should be correlated across the two conditions. 
However, as in Experiment 1, this correlation was not significant (p > 0.30). These results provide no evidence that the same magnitude system was used in both tasks. As a final assessment of the independence of mass and count, we separated the trials into those where area and number correlated and those where they did not. In the case of the Count task, there was no difference between these two stimulus array types (t(11) = –1.78; p > 0.10), replicating the finding from Experiment 1. In the case of the Mass task, there was a significant difference (t(11) = –2.428; p < 0.05), with the anti-correlated trials (i.e., where number gives the opposite answer of area) being superior (Mean = 0.16; SE = 0.06) to the correlated trials (Mean = 0.26; SE = 0.08). Two explanations are possible for this difference in the Mass condition. First, processing number in some way interfered with processing area. Although this proposal is possible, it seems unlikely given that participants were worse on those trials where number and area agreed on an answer. A second explanation seems more likely: when number and area are anti-correlated the color that wins in area has much larger blobs than the color that wins in number (e.g., 5 yellow blobs that are twice as big as 10 blue blobs). Thus, there are two methods of finding the answer: either by summing and comparing the total area, or by comparing the largest blob in each set (in the correlated condition, only the former strategy is possible). Either one of these strategies is consistent with the participants using area rather than number, but the anti-correlated trials allow for an additional source of evidence (i.e., largest blob), thus allowing for better discrimination performance. Consistent with this suggestion, data from the Count task in Experiment 2 suggest that when participants were judging number, differences in area were efficiently ignored as there was no difference in performance between correlated and anti-correlated trials in this task. 5 General discussion In Experiment 1 we found that both number and area discrimination obey Weber's law but that these tasks result in distinct and significantly different Weber fraction (i.e., area discrimination is better than number discrimination). In Experiment 2 we found that this distinction between number and area processing is maintained when subjects are asked to make number and area judgments about identical displays. These results demonstrate that number and area processing are distinct and engage separate magnitude representations (i.e., the Approximate Number System – ANS – and the Approximate Area System – AAS). Our data is incompatible with cognitive theories that claim that numeric and non-numeric quantity representation are largely identical (Walsh 2003; Lourenco & Longo 2010). If this is the case, it is unclear why, when given identical displays, two different Weber fractions capture the participant's performance. Clearly, there must be some difference in what the participants do when the sentence meaning suggests that they should gather and compare numeric information than when the sentence meaning suggests that they should gather area information. Although differences in encoding the stimuli may be responsible for a difference in Weber fractions, this seems especially unlikely given the identical displays in Experiment 2 and given the lack of correlations between the two tasks in both experiments. 
Therefore, some difference in the internal noise of independent quantity systems is likely responsible for this difference in Weber fractions (for evidence of a separation of brain regions that process area and number, see Castelli et al. 2006; Cohen Kadosh et al. 2008). We also explored the interface between the linguistic representation of mass and count syntax and the psychological representations of area and number. The results of Experiment 2 suggest that when verifying a comparative sentence containing mass noun syntax, participants are biased towards a cognitive system whose acuity is different from the cognitive system that they are biased towards when verifying a minimally different comparative sentence containing count noun syntax. Specifically, count noun syntax appears to bias towards numerical quantity as the relevant quantity, and such sentences are, therefore, verified by the ANS (or, given sufficient time, counting), while mass noun syntax (given our stimuli) specified area as the relevant quantity, and such sentences were, therefore, verified by the AAS. Although the present work only directly speaks to more, other determiners, such as most, should demonstrate equivalent results. Note that we are not claiming that all mass noun verifications need to occur via the AAS, nor that all count noun verifications need to occur via the ANS: e.g., given sufficient time, participants may have chosen to count the items, and given mass nouns like furniture, participants may have used the ANS. Our claim is, instead, that comparative count noun sentences bias towards processing number through whatever cognitive system can represent it given the demands of the task, and that mass nouns bias towards whatever type of quantity is made relevant by the noun or context (see below for details). Our results stand in contrast to and inform several theories about the count/mass distinction in English, about individual and non-individual representation and comparison, and about the semantics-cognition interface. First, one might suggest – in keeping with a long tradition in semantics – that details of how truth conditions should be specified are largely independent of how linguistic expressions happen to interface with particular cognitive systems; cp. Davidson (1974). One might imagine genuinely holistic minds such that specific word or sentence meanings do not constrain which cognitive systems are used to compute an answer to the question posed. For such thinkers, the ANS might be employed more often with count nouns because this tends to make evaluation easier, or because we are familiar with enumerating whole objects (which count nouns typically refer to), and not because of the semantic content of count nouns. Thus, under such a view, there would be no relationship between truth conditions and verification procedures. Instead, this view leads us to expect only effects of the display's interaction with many possible cognitive systems. But if language had no influence over the selection of relevant systems for verification, then given the same stimuli, the same cognitive system should have been used for verification. Looking back at the second experiment, however, it is clear that the count noun question resulted in much poorer performance than the mass noun question, despite the visual stimuli being identical. The sole factor that differed between these two conditions was the linguistic form used, and so, at least prima facie, the linguistic form and the specific cognitive systems used to verify it are intimately linked.
Our data also present a challenge for any semantic theory that treats both mass nouns and count nouns as having meanings that are specified (perhaps vaguely) in terms of countable atomic individuals. Other things equal, such theories predict that participants will either use whatever cognitive system is most easily accessed (which, given the above paragraph, cannot be sustained), or that there is a bias for identical verification procedures regardless of the presence of count or mass nouns. In particular, given a view that both count and mass nouns are ultimately unit-based, one should expect that the ANS verification procedure will be used in both instances. The difference, of course, would be that, in the case of area, one would need to assign a unit of area (e.g., a 10×10 pixel box), while, in the case of most count-nouns (e.g., cows, dots, people), units are immediately available. Several things suggest that this is not what subjects did. First, if number was the quantity computed in both mass and count cases, whatever representational and processing noise is affecting one task should affect the other. However, in our own data, we found no correlation between the two tasks, suggesting two distinct cognitive systems were being used. Second, if there is some form of cognitive conversion from area into number, then additional noise should exist in the case of area as a product of this conversion, resulting in a higher Weber fraction when compared to number. However, our data suggests exactly the opposite – subjects were, across the board, better at computing area. Finally, independent work from (Castelli et al. 2006) demonstrates a difference in how the brain computes area and number in displays similar to those from our Experiment 1. There appears no reason to think, then, that countable individuals – the default unit of the ANS system – are anything like the "units" of area used by the AAS. Our data is thus in line with those who maintain that the syntactic distinction between count and mass terms has a correlated semantic distinction; see (Barner & Snedeker 2006; Bale & Barner 2009), who suggest that the correct way to separate mass and count nouns is by what type of quantity they seek out during verification. The idea is that count nouns have a feature that biases towards numerical comparisons while mass nouns lack any such feature and require context to determine what approximate system it should use for verification (including, in the case of furniture, perhaps the ANS itself). Our data are consistent with this view. An important issue concerns languages that do not have a count/mass distinction, such as Cantonese, Mandarin, and Japanese, which instead communicate about discrete units using classifier phrases, analogous to the English bottles of water. At first glance, classifier languages may behave differently from the patterns observed here: given that a classifier transparently provides a unit by which the object should be subdivided, one may make a reasonable prediction that the ANS will be invoked for most classifier phrases. Recently, our group empirically tested this prediction by recruiting native Cantonese speakers, and giving them the exact same stimuli from Experiment 2, varying only whether the classifier phrase referred specifically to number, or more openly referred to portions of blue things. 
Contrary to the above prediction, we found that Cantonese speakers verified identical stimuli using two distinct systems – while the numeric classifier led to verification by the ANS, the portion classifier led to verification by the AAS (Odic et al. in prep.). Hence, the patterns observed in the two experiments reported here may even transparently extend to classifier languages. In a series of more recent publications, Chierchia (2010; 2015) has updated and elaborated on his proposal that the count/mass distinction is rooted in a more basic distinction between substances and objects. In this work, Chierchia claims that the meanings of mass nouns like water are akin to the meanings of plural nouns like cats, with an important twist: portions of water, unlike pluralities of cats, exhibit a special kind of divisional vagueness that precludes natural counting of minimal parts. The leading idea is that water represents examples of water as being (for practical purposes) endlessly divisible into examples of the same kind, whereas cats represents examples of (many) cats as being divisible into examples of the same kind only if the division respects the unity of individual cats. Chierchia's updated proposal is sufficiently programmatic to avoid predictions about how the contrast between pluralized noun and mass nouns should map to strategies for evaluating sentences in our test situations. But, although space prohibits us from discussing it at length, we wish to note that several key features of Chierchia's proposal seem puzzling given our data; see also Pietroski (forthcoming). Chierchia's (2010; 2015) proposal centers around the idea that mass nouns, such as water, are relevantly like pluralized nouns, such as cats. The idea is that plural nouns apply to distinctive objects – collections of some kind – which have countable elements. Thus, although there may be some vagueness (e.g., given evolution or bizarre cases) about what exactly counts as a cat, once such vagueness is resolved there is no further question about how many cats there are in a set. By contrast, there are many equally good (but overlapping) ways of carving a typical sample of water into multiple samples. On his view, water applies to certain collections, each portion of which is also water, but these collections fail to be enumerable in the relevant sense. Two problems emerge from this suggestion. First, rather than abandoning the apparent link between mass nouns and substances, Chierchia is committed to treating mass nouns such as furniture and jewelry as "fake mass nouns", despite ample evidence that, for example, children acquire these nouns at the same time as any other "real" mass noun (Barner & Snedeker 2008). Second, Chierchia's proposal invokes several problems with division and molecules: isn't a single molecule of H2O (an example of) water in any possible world, and won't this account require that each such molecule be constituted by submolecular water particles? Chierchia's reply is to relativize his technical formulation to "natural contexts", characterized as sets of worlds that are shared by competent (but typically scientifically naïve) speakers (e.g., perceiving the smallest quantity of water without any specialized machinery). But, we don't understand how linguistic competence can presuppose scientific ignorance, and apparently preclude the very hypothesis of atomism or the absence of complex machinery (where presumably, eyes aided by contact lenses are not complex, but microscopes are). 
Chierchia seems to be saying that in contrast with nouns like cat, nouns like water have remarkable meanings that somehow carry substantive (and false) implications about perceivable quantities of stuff. Perhaps this suggestion will turn out to be correct. One could then accommodate our experimental findings – in which the context provides a salient notion of unit that is neither minimal nor vague – by saying that the grammatical property of being a mass noun triggers a measuring strategy that is appropriate for substances across contexts – because substances exhibit divisional vagueness – while the grammatical property of being a count noun requires a counting strategy. But we suspect that all things considered, a more plausible package of views will combine our findings with the idea that mass (i.e., non-count) nouns have neutral meanings that allow for a measuring strategy, while the more complex count nouns have more restrictive meanings that call for counting. Finally, the work here broadly illustrates the usefulness of studying the interface of semantic and cognitive theories (Pietroski et al. 2009; Lidz et al. 2011). Our results suggest that, when the scene is simple and the cognitive systems involved are well-understood (e.g., the ANS and AAS), there is a lawful interface between the semantics involved in understanding a sentence and the psychology involved in verifying if the sentence is true; and in this way, empirical work can inform both psychological theorizing and semantic theorizing. In our previous work, we have suggested the Interface Transparency Thesis (ITT), which claims that the meaning of sentences exerts a bias in verification towards cognitive systems that most naturally implement the operations expressed in the meaning of the sentence. Thus, given that a sentence "More of the dots are blue" includes a request for a comparison operation (via the word more), those cognitive systems that have the ability to compute comparisons will be biased towards (i.e., most likely to be used, all else being equal) during verification. The present work also demonstrates that the operand – the noun – provides further bias, as the presence of the count noun in the above construction also biases towards those cognitive systems that can compare and represent number (e.g., ANS, or, given sufficient time, verbal counting). In the case of mass nouns this bias is especially striking, since alternative verification procedures, including ones related to ANS and individuating the blobs, were available. Thus, our results demonstrate an important similarity in both how we should treat individuals and non-individuals, for the purposes of quantification and comparison, in the semantics and in cognition. This tight relationship between semantic distinctions and cognitive distinctions should not be a surprise to anyone; in fact, it is necessary for meaning and verification to successfully occur every time we use language. What is more surprising, then, is that there has been such a large divide between cognitive and formal semantic theories of meaning. Through this work, we hope to highlight both how semantic and cognitive theories can mutually aid one another. Theories of semantics based in the truth-conditional properties of expressions can provide justification and predictions for theories of how these properties are mentally represented. 
And, we can draw on methods of assessing mental representations and processes from cognitive science in order to distinguish semantic theories that make similar (or identical) predictions with respect to the strictly linguistic properties of expressions. By enriching our conception of linguistic meaning to include more than representation-independent truth conditions, and by having cognitive theories constrained by the formal work of semantics, we hope to ultimately provide a theory of semantics that is both cognitively and linguistically justified. 1Readers interested in the interface between language and cognition will find further details concerning the Interface Transparency Thesis, an idea about how linguistic algorithms and cognitive algorithms may interact, in Pietroski et al. (2009) and Lidz et al. (2011). 2A separate system for representing number, often termed the Parallel Individuation System, is thought to underlie precise representations of 2–4 objects, but to lack set-based cardinality operations (Feigenson et al. 2004). In order to avoid the possibility of participants using the parallel individuation system, all stimuli in our experiments have at least 5 items, forcing the use of the ANS. 3For psychologists interested primarily in the single magnitude system versus multiple systems debate, data concerning Weber fractions is certainly relevant and has yet to be explored within subjects, but this exploration is not the primary focus for the present article. Rather, we hope to show how any differences in number and area processing can impact theorizing in other fields (e.g., formal semantics) and how looking at the interface between cognition and language opens up new sources of evidence for both psychologists and linguists. 4In order to further alleviate this issue, we ran a new group of 20 participants in an additional experiment. These participants completed the Area task with the same stimuli as in Experiment 1 and we instructed half (N = 10) to verify whether "More of the blob is blue or yellow", and half (N = 10) to verify whether "More of the goo is blue or yellow". Goo, unlike blob, is unambiguous in English and favors a mass interpretation. We found no effect of which sentence was used, suggesting that all participants relied on a mass interpretation of blob in our tasks (i.e., the word blob in the singular, much like rock and string, is understood as a mass noun). 5We wish to highlight again that we are not seeking to reduce the semantic count/mass distinction to a psychological object/substance distinction. While the semantic distinction is not reducible, count terms or mass terms may find a preferred mode of verification in the psychological magnitude representations of number and area (or continuous extents like mass). 6For example, many count/mass nouns exhibit (so-called) atomicity/homogeneity: intuitively, a cow can divide into smaller individuals, but no such sub-individual is a cow. At the same time, any portion of beef is also beef, even though beef does not divide naturally into beef atoms. But this is not a definitive criterion: consider the mass nouns furniture and succotash (Bale & Barner 2009); Rothstein (2010) argues that the count noun fence is also a counterexample. And as noted above, the difference is not ontological. Languages also often differ with regard to whether a lexical item (e.g., hair/cheveux) is primarily a mass or count noun (Chierchia 1998b).
AAS = Approximate area system, ANOVA = Analysis of variance, ANS = Approximate number system, erfc = Gauss error function, SE = standard error, w = Weber fraction. This research was funded by a NIH RO1 #90037164 awarded to J.H., and a NSERC PGS-D awarded to D.O. Bale, Alan C. & David Barner. 2009. The interpretation of functional heads: Using comparatives to explore the mass/count distinction. Journal of Semantics 26(3). 217–252. DOI: https://doi.org/10.1093/jos/ffp003 Barner, David & Jesse Snedeker. 2006. Children's early understanding of mass-count syntax: Individuation, lexical content, and the number asymmetry hypothesis. Language Learning and Development 2(3). 163–194. DOI: https://doi.org/10.1207/s15473341lld0203_2 Barth, Hilary. 2008. Judgments of discrete and continuous quantity: An illusory Stroop effect. Cognition 109(2). 251–266. DOI: https://doi.org/10.1016/j.cognition.2008.09.002 Bueti, Domenica & Vincent Walsh. 2009. The parietal cortex and the representation of time, space, number and other magnitudes. Philosophical Transactions of the Royal Society B: Biological Sciences 364(1525). 1831–1840. DOI: https://doi.org/10.1098/rstb.2009.0028 Cantlon, Jessica F., Kelley E. Safford & Elizabeth M. Brannon. 2010. Spontaneous analog number representations in 3-year-old children. Developmental Science 13(2). 289–297. DOI: https://doi.org/10.1111/j.1467-7687.2009.00887.x Cantlon, Jessica F., Michael Platt & Elizabeth M. Brannon. 2009. Beyond the number domain. Trends in Cognitive Sciences 13(2). 83–91. DOI: https://doi.org/10.1016/j.tics.2008.11.007 Carey, Susan. 2009. The origin of concepts. New York: Oxford University Press. DOI: https://doi.org/10.1093/acprof:oso/9780195367638.001.0001 Castelli, Fulvia, Daniel E. Glaser & Brian Butterworth. 2006. Discrete and analogue quantity processing in the parietal lobe: A functional MRI study. Proceedings of the National Academy of Sciences 103(12). 4693–4698. DOI: https://doi.org/10.1073/pnas.0600444103 Chierchia, Gennaro. 1998a. Plurality of mass nouns and the notion of 'semantic parameter'. Events and Grammar 70. 53–103. DOI: https://doi.org/10.1007/978-94-011-3969-4_4 Chierchia, Gennaro. 1998b. Reference to kinds across language. Natural Language Semantics 6(4). 339–405. DOI: https://doi.org/10.1023/A:1008324218506 Chierchia, Gennaro. 2015. How universal is the mass/count distinction? Three grammars of counting. In Yen-hui Audrey Li, Andrew Simpson & Wei-Tien Dylan Tsai (eds.), Chinese syntax: A cross-linguistic perspective, 147–177. Oxford, England: Oxford University Press. Cohen Kadosh, Roi, Jan Lammertyn & Veronique Izard. 2008. Are numbers special? An overview of chronometric, neuroimaging, developmental and comparative studies of magnitude representation. Progress in Neurobiology 84(2). 132–147. DOI: https://doi.org/10.1016/j.pneurobio.2007.11.001 Cordes, Sara & Elizabeth M. Brannon. 2008. The difficulties of representing continuous extent in infancy: Using number is just easier. Child Development 79(2). 476–489. DOI: https://doi.org/10.1111/j.1467-8624.2007.01137.x Davidson, Donald. 1974. Belief and the basis of meaning. Synthese 27(3). 309–323. DOI: https://doi.org/10.1007/BF00484597 Dehaene, Stanislas. 1997. The number sense. New York, NY: Oxford University Press. Feigenson, Lisa. 2007. The equality of quantity. Trends in Cognitive Sciences 11(5). 185–187. DOI: https://doi.org/10.1016/j.tics.2007.01.006 Feigenson, Lisa, Stanislas Dehaene & Elizabeth S. Spelke. 2004. Core systems of number. Trends in Cognitive Sciences 8(7). 307–314. 
DOI: https://doi.org/10.1016/j.tics.2004.05.002 Frank, Michael C., Daniel L. Everett, Evelina Fedorenko & Edward Gibson. 2008. Number as a cognitive technology: Evidence from Piraha language and cognition. Cognition 108(3). 819–824. DOI: https://doi.org/10.1016/j.cognition.2008.04.007 Frisson, Steven & Lyn Frazier. 2005. Carving up word meaning: Portioning and grinding. Journal of Memory and Language 53(2). 277–291. DOI: https://doi.org/10.1016/j.jml.2005.03.004 Gathercole, Virginia C. 1985. More and more and more about "More". Journal of Experimental Child Psychology 40. 73–104. DOI: https://doi.org/10.1016/0022-0965(85)90066-9 Gelman, Rochel & Charles Randy Gallistel. 1978. The child's understanding of number. Boston, MA: Harvard University Press. Gordon, Peter. 2004. Numerical cognition without words: Evidence from Amazonia. Science 306(5695). 496. DOI: https://doi.org/10.1126/science.1094492 Green, David Marvin & John A. Swets. 1966. Signal detection theory and psychophysics. Newport Beach, CA: Peninsula Pub. Higginbotham, Jim. 1994. Mass and count quantifiers. Linguistics and Philosophy 17(5). 447–480. DOI: https://doi.org/10.1007/BF00985831 Huntley-Fenner, Gavin, Susan Carey & Andrea Solimando. 2002. Objects are individuals but stuff doesn't count: Perceived rigidity and cohesiveness influence infants' representations of small groups of discrete entities. Cognition 85(3). 203–221. DOI: https://doi.org/10.1016/S0010-0277(02)00088-4 Hurewitz, Felicia W., Rochel Gelman & Brian Schnitzer. 2006. Sometimes area counts more than number. Proceedings of the National Academy of Sciences 103(51). 19599–19604. DOI: https://doi.org/10.1073/pnas.0609485103 Izard, Veronique, Coralie Sann, Elizabeth S. Spelke & Arlette Streri. 2009. Newborn infants perceive abstract numbers. Proceedings of the National Academy of Sciences 106(25). 10382–10385. DOI: https://doi.org/10.1073/pnas.0812142106 Izard, Veronique & Stanislas Dehaene. 2008. Calibrating the mental number line. Cognition 106(3). 1221–1247. DOI: https://doi.org/10.1016/j.cognition.2007.06.004 Krifka, Manfred. 1992. Thematic relations as links between nominal reference and temporal constitution. In Sag Ivan & Szabolcsi Anna (eds.), Lexical matters, 29–53. Stanford University, CA: CSLI Publications Landman, Fred. 1991. Structures for semantics. New York, NY: Springer. DOI: https://doi.org/10.1007/978-94-011-3212-1 Lidz, Jeffrey, Justin Halberda, Paul Pietroski & Tim Hunter. 2011. Interface transparency thesis and the psychosemantics of most. Natural Language Semantics 19(3). 227–256. Link, Godehard. 1983. The logical analysis of plurals and mass terms: A lattice-theoretical approach. Meaning, Use and Interpretation of Language 21. 302–323. DOI: https://doi.org/10.1515/9783110852820.302 Lourenco, Stella F. & Matthew R. Longo. 2010. General magnitude representation in human infants. Psychological Science 21(6). 873–881. DOI: https://doi.org/10.1177/0956797610370158 Mittwoch, Anita. 1988. Aspects of English aspect: On the interaction of perfect, progressive and durational phrases. Linguistics and Philosophy 11(2). 203–254. DOI: https://doi.org/10.1007/BF00632461 Morgan, Michael J. 2005. The visual computation of 2-D area by human observers. Vision Research 45(19). 2564–2570. DOI: https://doi.org/10.1016/j.visres.2005.04.004 Nachmias, Jacob. 2008. Judging spatial properties of simple figures. Vision Research 48(11). 1290–1296. DOI: https://doi.org/10.1016/j.visres.2008.02.024 Nieder, Andreas & Earl K. Miller. 2004. 
A parieto-frontal network for visual numerical information in the monkey. Proceedings of the National Academy of Sciences of the United States of America 101(19). 7457–7462. DOI: https://doi.org/10.1073/pnas.0402239101 Odic, Darko, Melissa E. Libertus, Lisa Feigenson & Justin Halberda. 2013. Developmental change in the acuity of approximate number and area representations. Developmental Psychology 49(6). 1103–1112. DOI: https://doi.org/10.1037/a0029472 Pica, Pierre, Cathy Lemer, Veronique Izard & Stanislas Dehaene. 2004. Exact and approximate arithmetic in an Amazonian indigene group. Science 306(5695). 499–503. DOI: https://doi.org/10.1126/science.1102085 Pietroski, Paul, Jeffrey Lidz, Tim Hunter & Justin Halberda. 2009. The Meaning of 'Most': Semantics, numerosity and psychology. Mind & Language 24(5). 554–585. DOI: https://doi.org/10.1111/j.1468-0017.2009.01374.x Rothstein, Susan. 2010. Counting and the mass/count distinction. Journal of Semantics 27(3). 343–397. DOI: https://doi.org/10.1093/jos/ffq007 Scholl, Brian J. 2001. Objects and attention: The state of the art. Cognition 80(1–2). 1–46. DOI: https://doi.org/10.1016/S0010-0277(00)00152-9 Teghtsoonian, Martha. 1965. The judgement of size. The American Journal of Psychology 78(3). 392–402. DOI: https://doi.org/10.2307/1420573 Walsh, Vincent. 2003. A theory of magnitude: Common cortical metrics of time, space and quantity. Trends in Cognitive Sciences 7(11). 483–488. DOI: https://doi.org/10.1016/j.tics.2003.09.002 Wynn, Karen. 1992. Children's acquisition of the number words and the counting system. Cognitive Psychology 24(2). 220–251. DOI: https://doi.org/10.1016/0010-0285(92)90008-P Odic, Darko, Paul Pietroski, Tim Hunter, Justin Halberda & Jeffrey Lidz. 2018. Individuals and non-individuals in cognition and semantics: The mass/count distinction and quantity representation. Glossa: A Journal of General Linguistics 3(1). 61. DOI: http://doi.org/10.5334/gjgl.409
Would a city underground in the desert make sense from a survival standpoint? In the desert, the major factors of survival are water, food and temperature. I have a city in a desert where the first two are covered by a nearby river and agriculture from that river, but would it make sense to build a city underground to beat the heat? Or would it be better to adapt the species living in the city to higher temperatures? reality-check cities architecture underground deserts How far beneath the surface would the "floor" of your city then be? (If it has multiple levels, a range of depths is fine.) – type_outcast Related: worldbuilding.stackexchange.com/questions/23373/… – HDE 226868 ♦ @type_outcast Deep enough to be insulated against the sun and weather/for the temperature to be livable. I don't know the exact number, though. – decayedarachnid Coober Pedy is an excellent example of this actually happening. – Roland Heath Sometimes it does rain in or near deserts, and when it does there can be severe flash floods. So make sure your underground bunker is above the flood plain, even in the desert. – RBarryYoung Yes, underground would help with heat Given food and water are covered, underground living will be cooler$^1$ and safer than living in direct sunlight, but there are some caveats: Geothermal gradient As you descend into the crust, the temperature increases in a steady, predictable fashion known as the geothermal gradient, which is about $25^{\circ}\text{C}/\text{km}$. Thus, your best bet is staying just a few meters below the surface. That way you'll get almost all of the insulating effects of the ground, while avoiding the increasing temperatures from below. Digging is hard Digging an underground city would be prohibitively difficult. Your people would be much better off to find an existing system of caves. Ventilation and heat regulation People generate a lot of heat and $\text{CO}_{2}$ that need to be exchanged for fresh, cool air. Putting multiple entry/exit points to your city will help, but you'll still need air flow, and this will certainly take a lot of effort! You didn't specify the level of technology your people are at. Fortunately I can present a fan that requires practically no technology at all: Put quite simply, you take a large piece of lightweight fabric, animal skin, whatever, and affix it to a wooden frame about $16 \times 16\text{"}$ ($40 \times 40 \text{cm}$). Make several of these. Then you have volunteers/workers/slaves at the (preferably ramped) entrances constantly push the air out. You will need one such fan for every 10–100 inhabitants depending on how densely packed the city is. Inspiration: Nuclear War Survival Skills book My inspiration came from the public domain book Nuclear War Survival Skills. At around p.59, they state that this fan can move 300 cubic feet/min, which is enough for 9 very crowded adults in hot weather, or up to 100 in cool weather. That book describes some other fans and is in general a great read for designing underground living on a small scale.
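To put rough numbers on the two points above, here is a quick back-of-the-envelope sketch. It is my own simplification, assuming a uniform 25 °C/km gradient, the book's 300 CFM per fan, and a hot-weather requirement of about 33 CFM per person (derived from the "9 very crowded adults" figure); the surface temperature and depth in the example are made up.

# Rough sanity check: ground temperature at depth and homemade fans needed.
import math

def temp_at_depth(surface_temp_c, depth_m, gradient_c_per_km=25.0):
    """Steady-state ground temperature at a given depth (ignores daily/seasonal swings)."""
    return surface_temp_c + gradient_c_per_km * depth_m / 1000.0

def fans_needed(population, cfm_per_person=33.0, cfm_per_fan=300.0):
    """KAP-style fans required; ~33 CFM/person is the hot-weather figure, ~3 CFM/person the cool-weather floor."""
    return math.ceil(population * cfm_per_person / cfm_per_fan)

print(temp_at_depth(22, 10))   # 22.25 C -- ten meters of depth adds almost nothing
print(fans_needed(900))        # 99 fans (and crews) for 900 people in hot weather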
With more technology, you can automate any type of fan somewhat by putting the fans on circular wheels or belts, and use some pulleys and gears to give the people (or beasts of burden) a mechanical advantage. Again, ventilation is necessary (and very easy to underestimate!). It will be hard work, but if done adequately, the underground would remain cool and hospitable to human life. You say you already have a nearby river. Does it run underground? Or is it at least somewhat near your underground city? If you can pipe some of it through your city, your inhabitants will have a much easier time with it, and the water will help cool the city even more. You didn't ask about lighting, so I'll keep it short: mirrors. Placed around your ventilation/entrance shafts, you can "beam" sunlight into the city. Your people would want to keep fires to a minimum, as they generate lots of carbon monoxide, which can be easily fatal. 1. More moderate: cooler during the daytime, and warmer during the night (a lot of deserts get quite cool during the night). – type_outcast Geothermal gradient? At 25C/km, that means you could go a fairly impressive 100m down and still only get +2.5C from that. – Dronz Don't forget that water flow could easily be adapted to turn those fans too! – thanby By building the tunnels right you can avoid manual fans. For example, line an area with black rock in full sunlight so it heats up; the rising air there will pull air through your network. Fireplaces can also be used to achieve the same thing by pulling air in towards them. – Tim B Your question is a bit misleading. The important thing is not to be underground, the important thing is to have thermal mass of stone around you. A historical example of what you want is Petra. Similar use goes back to the stone age, when people lived in caves that had stable temperatures year round. In addition to Petra, other examples of rock-cut architecture exist in Cappadocia and India where it was a result of suitable rock formations. In fact, I'd go so far as to say that such architecture will mainly exist in areas where you have natural caves or rock formations and easily workable stone so that the amount of work required is exceptionally low. Otherwise it will be easier to build above ground and just make the walls more massive. Typical solutions are adobe, mud brick, or packed earth which naturally allow relatively simple and easy construction of thick walls with high thermal mass. Actually building underground would generally be impractical since, while the resulting architecture would indeed have relatively stable temperature, the amount of work required would be higher than with other alternatives with the same protection from heat. Additionally, being actually underground would make it more difficult to deal with floods and sand. It is generally better to live so that gravity helps you keep your home safe from such issues. That said, it is reasonable for the desert city to have significant infrastructure underground. Underground aqueducts or qanats are likely. Similarly, underground tunnels make a good source of cool air for ventilation. This would be combined with windcatcher towers or similar. I guess you could say that the optimum is a combination of below and above ground elements. And that building above ground usually requires less labor and is the default barring natural caverns or exceptionally easily workable stone or special needs as with water conduits.
– Ville Niemi They could heap earth up around their houses and make them underground that way. They'd need strong walls and roofs. Do they have thick tree trunks? How are their roofs made now? Buildings made of stone with domed roofs could take the weight. – RedSonja @RedSonja That is what packed earth is for. Added link to the wikipedia article about it to the answer. Although it uses "rammed earth". – Ville Niemi I probably should expand on what @RedSonja said and my answer to it. The proper solution to providing thermal mass depends on the amount of thermal mass needed, which depends on the magnitude of temperature variation, which is roughly proportional to the difference between maximum and minimum temperature and the cycle of the variation. While in a desert the temperature differences are large, the cycle is generally between day and night. For that, making the walls massive using methods I covered in the answer is enough; Earth sheltering is only needed for longer seasonal cycles. Insulation against sun, wind, rain, and temperature changes! The geothermal gradient can help keep a cave system warm. Some houses use geothermal heat pumps to warm or cool as needed, which can be extended to caves. Caves can often have natural choke points, allowing for easy defense. Your city may be hard to spot. After all, it looks like any other bit of land! The Other Issues Light is an issue. Do you have skylights? Mirror systems? Do you use a lot of candles? You need to dig out caves and rely on some structural engineering to keep things up. People have had great success, otherwise! Fresh air needs to enter somehow. This requires ventilation, but that can be done. – PipperChip
Infrared isn't some special space-age thing. It's just a color you can't really see much, if at all (unless you've got super powers), because it's outside the visible spectrum for humans. (And, infrared light heats things up, much like how UV rays give people sunburns, kill microbes, and stimulate vitamin D production. Plastic and modern glass usually block UV rays, and this helps to prevent damage of some kind or other—including to vitamins in milk and such, I believe. So, you would need a color that reflects infrared to keep things inside cooler.) In our modern world, you can get clear inserts to go on windows to block infrared (and stop the house from warming up through the windows). You could do the same thing to your whole house (or city) with the right color of paint or something. – Brōtsyorfuzthrāx Have +1 for the termite idea. Was going to add it as a comment, but has already been covered. – Darren Bartrup-Cook Survivability is probably not an issue, as others have stated - increased insulation from the heat, etc. The difficult part of this is creating a history for such a city. One doesn't simply pick a spot in the desert and decide to build a city there, there has to be a back story to explain how it got there. One possibility would be for it to have started as a normal, above-ground settlement that over time became increasingly buried under centuries of sandstorms. People started building covered walkways between the buildings to keep out the sand, and building tall chimney-like structures, both to allow for air circulation and also some with ladders to provide a means of egress from the buried buildings. These chimneys would continue to be built taller as the sand became deeper. Eventually, as modern technology became popular, they might also run electrical cables down some of the chimneys, as well as plumbing/sewage, and later telephone, ethernet, fiberoptic, etc. Some might even be converted into elevator shafts. Prior to being hooked up to electricity, their primary source of light would be torches, which produce smoke, and so these would also need to be placed near the chimney structures. It's possible you could line the edges of chimneys with polished metal to act as mirrors and bring some daylight down into the buildings, but again, this means most of the light would be near the chimneys. Some enterprising architects might see the benefit of designing arched ceilings so that any smoke from light sources would collect into the chimneys. One problem with this scenario is it does make it difficult to expand the city, as digging underground in sand is not a simple task. It's possible that additional structures would be built on the surface, which might themselves be buried by more sandstorms, so you have layer-upon-layer of city, with the deepest parts being the oldest and newer structures being closer to the surface. The most recent developments would be on the surface. Of course, any surface structures would have to be positioned such that they are not directly on top of the chimneys from the deeper chambers. Eventually, social strata would start to form based on depth. The above-ground level would be mostly traders and craftsmen, the type who would do business with outsiders most frequently.
Below them would be the aristocracy, who, being the wealthiest residents, would choose the most comfortable lodgings: deep enough to be protected from the heat, but not so deep as to have little access to sunlight, fresh air, quick trips to the surface, etc. After that, things would go steadily downward in terms of social standing, with the poorest people relegated to living in the oldest, deepest parts of the city.

Darrel Hoffman

The best thing to do would be both. Snakes, lizards, and other creatures live in the ground during the day, when it's hot, then come out at night. Your species could be nocturnal, resistant to heat, and be able to come out during the day if necessary. Being underground also helps insulate against all kinds of things.

Xandar The Zenon
Gauss Linear Model

The Gauss statistical model says that the $(x_n,y_n)\in\mathbb{R}^p\times\mathbb{R}$ are generated as follows: there are no restrictions on the way $x_n$ are generated; given $x_n$, the $y_n$ are generated from $y_n = w\cdot x_n + \xi_n$, where $w\in\mathbb{R}^p$ is a vector of parameters, $\xi_n$ is distributed as $N(0,\sigma^2)$, and $\sigma>0$ is another parameter. See Section 8.5 of Vovk et al. (2005) and Vovk et al. (2009) for the formulation of this model as an on-line compression model.

The most basic version of this model is where there are no $x$s, and the model is $y_n\sim N(0,\sigma^2)$. The summary of $y_1,\ldots,y_n$ is $t_n:=y_1^2+\cdots+y_n^2$, and the Gauss repetitive structure postulates that, given this summary, the distribution of $y_1,\ldots,y_n$ is uniform on the sphere of radius $t_n^{1/2}$. Borel (1914) noticed that the Gauss statistical model (used by Maxwell as a model in statistical physics) is equivalent to the Gauss repetitive structure (used for a similar purpose by Gibbs). For further historical comments, see Vovk et al. (2005), Section 8.8, and Diaconis and Freedman (1987), Section 6.

Persi Diaconis and David Freedman (1987). A dozen de Finetti-style results in search of a theory. Annales de l'Institut Henri Poincaré B 23:397-423.
Vladimir Vovk, Alexander Gammerman, and Glenn Shafer (2005). Algorithmic learning in a random world. Springer, New York.
Vladimir Vovk, Ilia Nouretdinov, and Alexander Gammerman (2009). On-line predictive linear regression. Annals of Statistics 37:1566-1590.
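To make the equivalence concrete, the short Python sketch below (an illustrative addition, not part of the original article) simulates the basic no-$x$ version of the model: drawing $y_1,\ldots,y_n$ i.i.d. from $N(0,\sigma^2)$ and then, given the summary $t_n$, resampling a vector uniformly on the sphere of radius $t_n^{1/2}$ by normalizing an independent Gaussian draw.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 5, 2.0

# Gauss statistical model: y_1, ..., y_n are i.i.d. N(0, sigma^2).
y = rng.normal(0.0, sigma, size=n)
t_n = np.sum(y**2)                      # summary statistic t_n = y_1^2 + ... + y_n^2

# Gauss repetitive structure: given t_n, the vector (y_1, ..., y_n) is
# uniform on the sphere of radius sqrt(t_n).  A uniform point on that
# sphere is obtained by normalizing an independent Gaussian vector.
g = rng.normal(size=n)
y_sphere = np.sqrt(t_n) * g / np.linalg.norm(g)

print("summary t_n           :", t_n)
print("sample from the model :", y)
print("resampled given t_n   :", y_sphere, "-> squared norm", np.sum(y_sphere**2))
```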
Cyclic inner functions in growth classes and applications to approximation problems

Canadian Mathematical Bulletin, First View, pp. 1-12. Part of: Function theory on the disc. Published online by Cambridge University Press: 23 November 2022.

Bartosz Malman*, Division of Basic Science, KTH Royal Institute of Technology, Stockholm, Sweden. e-mail: [email protected]

It is well known that for any inner function $\theta$ defined in the unit disk $\mathbb{D}$, the following two conditions: (i) there exists a sequence of polynomials $\{p_n\}_n$ such that $\lim_{n \to \infty} \theta(z) p_n(z) = 1$ for all $z \in \mathbb{D}$, and (ii) $\sup_n \| \theta p_n \|_\infty < \infty$, are incompatible, i.e., cannot be satisfied simultaneously. However, it is also known that if we relax the second condition to allow for arbitrarily slow growth of the sequence $\{ \theta(z) p_n(z)\}_n$ as $|z| \to 1$, then condition (i) can be met for some singular inner function. We discuss certain consequences of this fact which are related to the rate of decay of Taylor coefficients and moduli of continuity of functions in model spaces $K_\theta$. In particular, we establish a variant of a result of Khavinson and Dyakonov on nonexistence of functions with certain smoothness properties in $K_\theta$, and we show that the classical Aleksandrov theorem on density of continuous functions in $K_\theta$ is essentially optimal. We consider also the same questions in the context of de Branges–Rovnyak spaces $\mathcal{H}(b)$ and show that the corresponding approximation result also is optimal.

Keywords: singular inner functions, cyclicity, model spaces, de Branges–Rovnyak spaces. MSC classification, Primary: 30J15 (Singular inner functions).

© The Author(s), 2022. Published by Cambridge University Press on behalf of The Canadian Mathematical Society.

References

Aleksandrov, A. B., Invariant subspaces of shift operators. An axiomatic approach. Zap. Nauchn. Sem. Leningrad. Otdel. Mat. Inst. Steklov. (LOMI) 113(1981), no. 264, 7–26.
Aleman, A. and Malman, B., Density of disk algebra functions in de Branges–Rovnyak spaces. C. R. Math. Acad. Sci. Paris 355(2017), no. 8, 871–875.
Anderson, J. M., Fernandez, J. L., and Shields, A. L., Inner functions and cyclic vectors in the Bloch space. Trans. Amer. Math. Soc. 323(1991), no. 1, 429–448.
Bouya, B., Closed ideals in analytic weighted Lipschitz algebras. Adv. Math. 219(2008), no. 5, 1446–1468.
Cima, J., Matheson, A., and Ross, W., The Cauchy transform, Mathematical Surveys and Monographs, 125, American Mathematical Society, Providence, RI, 2006.
Dyakonov, K. and Khavinson, D., Smooth functions in star-invariant subspaces. Contemp. Math. 393(2006), 59.
Fricain, E. and Mashreghi, J., The theory of $\mathcal{H}(b)$ spaces. Vol. 1, New Mathematical Monographs, 20, Cambridge University Press, Cambridge, 2016.
Garcia, S. R., Mashreghi, J., and Ross, W. T., Introduction to model spaces and their operators, Cambridge University Press, Cambridge, 2016.
Hartman, P. and Kershner, R., The structure of monotone functions. Amer. J. Math. 59(1937), no. 4, 809–822.
Khrushchev, S. V., The problem of simultaneous approximation and of removal of the singularities of Cauchy type integrals. Tr. Mat. Inst. Steklova 130(1978), 124–195.
Korenblum, B., An extension of the Nevanlinna theory. Acta Mathematica 135(1975), 187–219.
Limani, A. and Malman, B., An abstract approach to approximation in spaces of pseudocontinuable functions. Proc. Amer. Math. Soc. 150(2022), no. 6, 2509–2519.
Ransford, T., On the decay of singular inner functions. Can. Math. Bull. 64(2021), no. 4, 902–905.
Roberts, J., Cyclic inner functions in the Bergman spaces and weak outer functions in $H^p$, $0<p<1$. Ill. J. Math. 29(1985), no. 1, 25–38.
Sarason, D., Sub-Hardy Hilbert spaces in the unit disk, University of Arkansas Lecture Notes in the Mathematical Sciences, 10, John Wiley & Sons, Inc., New York, 1994.
Shapiro, H. S., Weakly invertible elements in certain function spaces, and generators in $\ell_1$. Michigan Math. J. 11(1964), no. 1, 161–165.
Shapiro, H. S., A class of singular functions. Can. J. Math. 20(1968), 1425–1431.
Tamrazov, P. M., Contour and solid structure properties of holomorphic functions of a complex variable. Russ. Math. Surv. 28(1973), no. 1, 141.
Part 2: Potentiometers and Tone Capacitors

What is a Potentiometer?

Potentiometers, or "pots" for short, are used for volume and tone control in electric guitars. They allow us to alter the electrical resistance in a circuit at the turn of a knob.

[Figure: Drawing of physical potentiometers depicting terminals 1, 2, and 3]
[Figure: Drawing of potentiometer schematic depicting terminals 1, 2, and 3]

It is useful to know the fundamental relationship between voltage, current and resistance known as Ohm's Law when understanding how electric guitar circuits work. The guitar pickups provide the voltage and current source, while the potentiometers provide the resistance. From Ohm's Law we can see how increasing resistance decreases the flow of current through a circuit, while decreasing the resistance increases the current flow. If two circuit paths are provided from a common voltage source, more current will flow through the path of least resistance.

$$V = I \times R$$ where ~V~ = voltage, ~I~ = current and ~R~ = resistance

[Figure: V = IR diagram]
[Figure: Basic Electric Guitar Circuit]

Alternative functional terminal names: Terminal 1 "Cold", Terminal 2 "Wiper", Terminal 3 "Hot".

[Figure: A visual representation of how a potentiometer works, based on a 300 degree rotation]

We can visualize the operation of a potentiometer from the drawing above. Imagine a resistive track connected from terminal 1 to 3 of the pot. Terminal 2 is connected to a wiper that sweeps along the resistive track when the potentiometer shaft is rotated from 0° to 300°.
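As a small illustration of that wiper behavior (not from the original article; the linear taper and 300° travel are assumptions chosen only for the example), the following Python sketch models how the track resistance splits between the two halves as the shaft turns:

```python
def pot_resistances(rotation_deg, total_ohms=250e3, travel_deg=300.0):
    """Split a potentiometer's track resistance at the wiper position.

    Assumes a linear taper: the wiper position is proportional to rotation.
    Returns (R_1_to_2, R_2_to_3); their sum always equals the total track value.
    """
    frac = min(max(rotation_deg / travel_deg, 0.0), 1.0)
    r_1_2 = total_ohms * frac          # resistance between terminals 1 and 2
    r_2_3 = total_ohms * (1.0 - frac)  # resistance between terminals 2 and 3
    return r_1_2, r_2_3

# Ohm's law check: 0.5 V across the full 250 kOhm track gives I = V / R = 2 microamps.
for angle in (0, 150, 300):
    print(angle, "deg ->", pot_resistances(angle))
```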
This changes the resistance from terminals 1 to 2 and 2 to 3 simultaneously, while the resistance from terminal 1 to 3 remains the same. As the resistance from terminal 1 to 2 increases, the resistance from terminal 2 to 3 decreases, and vice-versa.

Tone Control: Variable Resistors & Tone Capacitors

Tone pots are connected using only terminals 1 and 2 for use as a variable resistor whose resistance increases with a clockwise shaft rotation. The tone pot works in conjunction with the tone capacitor ("cap") to serve as an adjustable high frequency drain for the signal produced by the pickups.

[Figure: Tone Circuit]

The tone pot's resistance is the same for all signal frequencies; however, the capacitor has AC impedance which varies depending on both the signal frequency and the value of capacitance, as shown in the equation below.

$$\text{Capacitor Impedance} = Z_{\text{capacitor}} = \frac{1}{2 \pi f C}$$ where ~f~ = frequency and ~C~ = capacitance

Capacitor impedance decreases if capacitance or frequency increases. High frequencies see less impedance from the same capacitor than low frequencies. The table below shows impedance calculations for three of the most common tone cap values at a low frequency (100 Hz) and a high frequency (5 kHz):

~C~ (Capacitance) | ƒ (Frequency) | ~Z~ (Impedance)
.022 μF | 100 Hz | 72.3 kΩ
.022 μF | 5 kHz | 1.45 kΩ
.047 μF | 100 Hz | 33.9 kΩ
.047 μF | 5 kHz | 677 Ω
.10 μF | 100 Hz | 15.9 kΩ
.10 μF | 5 kHz | 318 Ω

When the tone pot is set to its maximum resistance (e.g. 250 kΩ), all of the frequencies (low and high) have a relatively high path of resistance to ground. As we reduce the resistance of the tone pot to 0 Ω, the impedance of the capacitor has more of an impact and we gradually lose more high frequencies to ground through the tone circuit. If we use a higher value capacitor, we lose more high frequencies and get a darker, fatter sound than if we use a lower value.

Volume Control: Variable Voltage Dividers

Volume pots are connected using all three terminals in a way that provides a variable voltage divider for the signal from the pickups. The voltage produced by the pickups (input voltage) is connected between the volume pot terminals 1 and 3, while the guitar's output jack (output voltage) is connected between terminals 1 and 2.

Voltage divider equation: $$V_{\text{out}} = V_{\text{in}} \times \frac{R_2}{R_1 + R_2}$$

From the voltage divider equation we can see that if ~R_1 = 0\text{Ω}~ and ~R_2 = 250\text{kΩ}~, then the output voltage will be equal to the input voltage (full volume).

$$V_{\text{out}} = V_{\text{in}} \times \frac{250\text{kΩ}}{0 + 250\text{kΩ}} = V_{\text{in}} \times \frac{250\text{kΩ}}{250\text{kΩ}}$$
$$V_{\text{out}} = V_{\text{in}}$$

If ~R_1 = 250\text{kΩ}~ and ~R_2 = 0\text{Ω}~, then the output voltage will be zero (no sound).

$$V_{\text{out}} = V_{\text{in}} \times \frac{0}{250\text{kΩ} + 0} = V_{\text{in}} \times \frac{0}{250\text{kΩ}}$$
$$V_{\text{out}} = 0$$

Two Resistor Voltage Divider Schematic:

$$V_{\text{in}} = 60\text{mV} \text{, } R_1 = 125\text{kΩ} \text{, } R_2 = 125\text{kΩ}$$
$$V_{\text{out}} = V_{\text{in}} \times \frac{R_2}{R_1 + R_2}$$
$$V_{\text{out}} = 60\text{mV} \times \frac{125\text{kΩ}}{125\text{kΩ} + 125\text{kΩ}}$$
$$V_{\text{out}} = 60\text{mV} \times \frac{1}{2}$$
$$V_{\text{out}} = 30\text{mV}$$

Potentiometer Taper

The taper of a potentiometer indicates how the output to input voltage ratio will change with respect to the shaft rotation. The two taper curves below are examples of the two most common guitar pot tapers as they would be seen on a manufacturer data sheet.
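The impedance and divider numbers above are easy to reproduce. The short Python sketch below (an illustrative addition, not part of the original article) evaluates ~Z = 1/(2 \pi f C)~ for the tabulated tone-cap values and recomputes the 60 mV voltage-divider example:

```python
import math

def cap_impedance(freq_hz, cap_farads):
    """AC impedance magnitude of a capacitor: Z = 1 / (2*pi*f*C)."""
    return 1.0 / (2.0 * math.pi * freq_hz * cap_farads)

def divider_out(v_in, r1, r2):
    """Voltage divider output measured across R2."""
    return v_in * r2 / (r1 + r2)

# Reproduce the tone-cap impedance table (results in ohms).
for cap_uf in (0.022, 0.047, 0.10):
    for freq in (100, 5000):
        z = cap_impedance(freq, cap_uf * 1e-6)
        print(f"{cap_uf} uF @ {freq} Hz -> {z:,.0f} ohm")

# Reproduce the 60 mV two-resistor divider example: expect 0.030 V (30 mV).
print(divider_out(60e-3, 125e3, 125e3))
```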
The rotational travel refers to turning the potentiometer shaft clockwise from 0° to 300°, as in the previous visual representation drawing.

How do you know when to use an audio or linear taper potentiometer?

The type of potentiometer you should use will depend on the type of circuit you are designing for. Typically, for audio circuits the audio taper potentiometer is used. This is because the audio taper potentiometer functions on a logarithmic scale, which is the scale on which the human ear perceives sound. Even though the taper chart appears to have a sudden increase in volume as the rotation increases, in fact the perception of the sound increase will occur on a gradual scale. The linear taper will actually (counterintuitively) have a more significant sudden volume swell effect because of how the human ear perceives the scale. However, linear potentiometers are often used for other functions in audio circuits which do not directly affect audio output. In the end, both types of potentiometers will give you the same range of output (from 0 to full), but the rate at which that range changes varies between the two.

How do you know what value of potentiometer to use?

The actual value of the pot itself does not affect the input to output voltage ratio, but it does alter the peak frequency of the pickup. If you want a brighter sound from your pickups, use a pot with a larger total resistance. If you want a darker sound, use a smaller total resistance. In general, 250 kΩ pots are used with single-coil pickups and 500 kΩ pots are used with humbucking pickups.

Specialized Pots

Potentiometers are used in all types of electronic products, so it is a good idea to look for potentiometers specifically designed to be used in electric guitars. If you do a lot of volume swells, you will want to make sure the rotational torque of the shaft feels good to you, and most pots designed specifically for guitar will have taken this into account. When you start looking for guitar-specific pots, you will also find specialty pots like push-pull pots, no-load pots and blend pots, which are all great for getting creative and customizing your guitar once you understand how basic electric guitar circuits work.

By Kurt Prange (BSEE), Sales Engineer for Antique Electronic Supply - based in Tempe, AZ. Kurt began playing guitar at the age of nine in Kalamazoo, Michigan. He is a guitar DIY'er and tube amplifier designer who enjoys helping other musicians along in the endless pursuit of tone.

Note that the information presented in this article is for reference purposes only. Antique Electronic Supply makes no claims, promises, or guarantees about the accuracy, completeness, or adequacy of the contents of this article, and expressly disclaims liability for errors or omissions on the part of the author. No warranty of any kind, implied, expressed, or statutory, including but not limited to the warranties of non-infringement of third party rights, title, merchantability, or fitness for a particular purpose, is given with respect to the contents of this article or its links to other resources.
Naming Ionic Compounds

An ionic compound is a chemical compound composed of ions held together by electrostatic forces. The compound is neutral overall, but consists of positively charged ions called cations and negatively charged ions called anions. Because the net charge must be zero, cations and anions combine in electrically neutral ratios: one Na+ pairs with one Cl-, while one Ca2+ pairs with two Br-. When a compound is made up entirely of non-metals (CO2, H2O, or NH3, for example), its smallest unit is a molecule; a metal combined with a non-metal instead forms a giant ionic lattice of layer after layer of oppositely charged ions, so there are no NaCl "molecules" in rock salt.

To name an ionic compound, write the cation name first, followed by the anion. Cations of metals in Groups 1 and 2 keep the name of the element (sodium, magnesium), while a monatomic anion takes the ending -ide: NaCl is sodium chloride, BaCl2 is barium chloride, and NaBr is sodium bromide. Polyatomic ions keep their own names, as in NH4F (ammonium fluoride), Sc(OH)3 (scandium hydroxide), Al(CN)3 (aluminum cyanide), and Na3PO4 (sodium phosphate). The complex ion in Na3AlF6 gives the name sodium hexafluoridoaluminate, commonly called cryolite.

For transition metals that occur in more than one oxidation state, the charge of the cation is written as a Roman numeral in parentheses: CuO is copper(II) oxide, CuI is copper(I) iodide, Fe2(CO3)3 is iron(III) carbonate, MnF3 is manganese(III) fluoride, SnSe2 is tin(IV) selenide, V2(SO4)3 is vanadium(III) sulfate, and Cr(NO2)3 is chromium(III) nitrite. An older convention instead adds -ous or -ic to the Latin name of the element (stannous/stannic for tin) for the lower and higher charge, respectively. Silver is an exception in practice: because it is almost always +1, AgI is simply called silver iodide.

Molecular (covalent) compounds, by contrast, are named with Greek prefixes indicating the number of atoms, with the prefix mono- dropped for the first element: SO2 is sulfur dioxide and NO is nitrogen monoxide.
Integrated processing of ground- and space-based GPS observations: improving GPS satellite orbits observed with sparse ground networks

Wen Huang (ORCID: orcid.org/0000-0001-9721-8978), Benjamin Männel, Pierre Sakic (ORCID: orcid.org/0000-0003-1770-0532), Maorong Ge & Harald Schuh

Journal of Geodesy, volume 94, Article number: 96 (2020). A Correction to this article was published on 30 August 2021.

The precise orbit determination (POD) of Global Navigation Satellite System (GNSS) satellites and of low Earth orbiters (LEOs) is usually performed independently. Integrating LEO onboard observations into the processing is a potential way to improve the GNSS orbits, especially for the developing GNSS, e.g., Galileo with a sparse sensor station network and Beidou with a regionally distributed operating network. In recent years, a few studies have combined the processing of ground- and space-based GNSS observations. The integrated POD of GPS satellites and seven LEOs, including GRACE-A/B, OSTM/Jason-2, Jason-3, and Swarm-A/B/C, is discussed in this study. GPS code and phase observations obtained by the onboard GPS receivers of the LEOs and by ground-based receivers of the International GNSS Service (IGS) tracking network are used together in one least-squares adjustment. The POD solutions of the integrated processing with different subsets of LEOs and ground stations are analyzed in detail. The derived GPS satellite orbits are validated by comparison with the official IGS products and by internal comparisons based on the differences of overlapping orbits and of satellite positions at the day-boundary epoch. The differences between the GPS satellite orbits derived from a 26-station network and the official IGS products decrease from 37.5 to 23.9 mm (\(34\%\) improvement) in 1D-mean RMS when adding seven LEOs. Both the number of space-based observations and the LEO orbit geometry affect the GPS satellite orbits derived in the integrated processing; in this study, the latter proves to be more critical. By including three LEOs in three different orbital planes, the GPS satellite orbits improve more than by adding seven well-selected additional stations to the network. Experiments with a ten-station, regionally distributed network show an improvement of the GPS satellite orbits from about 25 cm to less than five centimeters in 1D-mean RMS after integrating the seven LEOs.
2015; Männel and Rothacher 2017), and the terrestrial reference frame (König 2018). The integrated POD of GPS satellites and LEOs has also been performed in several studies. Zhu et al. (2004) and König et al. (2005) compared two POD approaches for GPS, the Gravity Recovery and Climate Experiment (GRACE), and the Challenging Minisatellite Payload (CHAMP) satellites. In the first approach, named 'one-step', the orbits of the above-mentioned satellites are estimated simultaneously. In the other approach, the orbits of the GPS satellites and the LEOs are determined sequentially. The authors concluded that the orbits determined by the 'one-step' approach are more accurate. Geng et al. (2008) showed that the GPS satellite orbits derived by supplementing a 21-station network with GRACE and CHAMP satellites are more accurate than the solution based on a 43-station network. Otten et al. (2012) combined various GNSS satellites and LEOs at the observation level including GNSS, Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS), and Satellite Laser Ranging (SLR). Zoulida et al. (2016) and Zhao et al. (2017) performed an integrated POD for OSTM/Jason-2 and FengYun-3C with GPS and Beidou, respectively. These studies reported the benefits of integrating LEOs into the POD in different aspects. However, only one or two LEO missions with GNSS data were considered in the above-mentioned studies. In this study, we considered seven LEOs, including GRACE-A, GRACE-B, OSTM/Jason-2, Jason-3, Swarm-A, Swarm-B, and Swarm-C. For the selection of ground stations, the characteristics of the ground segments of Galileo and Beidou are taken into consideration. The Galileo Sensor Station (GSS) network includes 16 sites (Sakic et al. 2018) and Beidou has a regionally distributed ground segment (Yang 2018). Integrating LEOs in a joint POD processing is a potential way to supplement such limited ground segments. Based on the integrated processing of different subsets of ground stations and LEOs, the impact of integrating LEOs on the GPS satellite orbits is discussed.

In Sect. 2, the integrated processing is introduced briefly. The characteristics of the LEOs and their data selection are presented. The processing days are selected based on the data status of the LEOs. Two main sparse subsets of the available IGS stations are selected based on the motivation of our study. The strategy of our processing and analysis and all the designed scenarios are explained in detail. All the results and analysis are given in Sect. 3. It includes four parts. Firstly, the impact of the number of LEOs and their orbital planes on GPS satellite orbits is discussed. Secondly, internal comparisons of the orbit quality based on the differences of overlapping orbits and satellite positions at day-boundary epochs are performed. Thirdly, the effects of supplementing a sparse and non-homogeneously distributed station network by seven carefully selected additional stations or by three LEOs in different orbital planes are compared. Fourthly, we present an additional experiment to show the GPS satellite orbit improvement by adding seven LEOs to a regional ground network. The conclusions are given based on the above-mentioned results and analysis in Sect. 4.

Integrated POD of GPS satellites and LEOs

The method of the integrated POD

The method of the integrated POD applied in this study is known as the one-step method (Montenbruck and Gill 2000). The approximate initial epoch states of the GPS satellites and LEOs are computed from broadcast ephemerides and by single point positioning, respectively. Based on the equations of motion of the GPS satellites and LEOs, the initial orbits are obtained by numerical integration. The state equation reads $$\begin{aligned} x_{i}=T(t_{i},t_{0})x_{0}+S(t_{i},t_{0})f, \end{aligned}$$
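As a hedged illustration of how an initial orbit is produced by numerical integration of the equations of motion, the Python sketch below propagates an example initial state over one day. The simple two-body force model and the LEO-like initial state are illustrative assumptions only; they are not the force models listed in Table 3.

```python
import numpy as np
from scipy.integrate import solve_ivp

GM = 3.986004418e14  # Earth's gravitational parameter [m^3/s^2]

def two_body(t, state):
    """Equations of motion with a central gravity term only (illustrative)."""
    r, v = state[:3], state[3:]
    acc = -GM * r / np.linalg.norm(r) ** 3
    return np.concatenate((v, acc))

# Example initial epoch state x_0 (position [m], velocity [m/s]) of a LEO-like orbit.
x0 = np.array([6_871e3, 0.0, 0.0, 0.0, 5.4e3, 5.4e3])

# Propagate the state over one day; the solver plays the role of the orbit integrator.
sol = solve_ivp(two_body, (0.0, 86400.0), x0, max_step=30.0, rtol=1e-9, atol=1e-9)
print("position at the last epoch [m]:", sol.y[:3, -1])
```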
The approximate initial epoch status of GPS satellites and LEOs are computed from broadcast ephemerides and by single point positioning, respectively. Based on the equations of motion of GPS satellites and LEOs, the initial orbits of GPS and LEOs are delivered by numerical integration. The state equation reads $$\begin{aligned} x_{i}=T(t_{i},t_{0})x_{0}+S(t_{i},t_{0})f, \end{aligned}$$ where \(x_{i}\) is the state vector of the satellite at epoch \(t_i\), \(x_{0}\) is the initial epoch state vector, f contains the force model parameters, and \(T(t_{i},t_{0})\) and \(S(t_{i},t_{0})\) are transition matrices and sensitivity matrices, respectively. In the one-step estimation, the ground- and space-based observation equations at epoch \(t_{i}\) read $$\begin{aligned} L^{gps}_{sta}&=F^{gps}_{sta}(x_{gps},x_{sta}, C^{gps}, c_{sta},T^{gps}_{sta},I^{gps}_{sta},A^{gps}_{sta},t_i)\nonumber \\&\quad +v^{gps}_{sta}, \nonumber \\ L^{gps}_{leo}&=F^{leo}_{sta}(x_{leo}, x_{gps},C^{gps}, c_{leo}, T^{gps}_{leo},I^{gps}_{leo},A^{gps}_{leo},t_i)\nonumber \\&\quad +v^{gps}_{leo}, \end{aligned}$$ where \(L_{sta}\) and \(L_{leo} \) are ground- and space-based measurements, \(x_{gps}\) and \(x_{leo}\) are the positions of GPS satellites and LEOs at the current epoch, \(x_{sta}\) is the static position of the ground station, c denotes the receiver clock offset, C denotes the GPS satellite clock offset, T is the troposphere delay, I is the ionosphere delay, \(A^{gps}_{sta}\) and \(A^{gps}_{leo}\) are carrier phase ambiguities of stations and LEOs, and \(v^{gps}_{sta}\) and \(v^{gps}_{leo}\) contain unmodeled effects and measurement errors. The estimation is performed by inserting Eq. 1 to linearized Eq. 2. The accurate initial epoch states and force model parameters of GPS satellites and LEOs are estimated in a least-squares adjustment by using ground-based and LEOs onboard observations simultaneously. It has to be mentioned that we formed ionosphere-free linear combinations of the measurements. Flowchart of the integrated POD Figure 1 presents the flowchart of the whole processing. Before the one-step estimation, all the observations are cleaned based on the TurboEdit algorithm (Blewitt 1990). Several iterations of estimation are performed to improve the solution. After each estimation, the orbits of GPS satellites and LEOs are updated by orbit integration based on the new solution of initial epoch states and force model parameters. Meanwhile, the data are cleaned based on the residuals of observations. After completing the data cleaning, the ambiguities of the ground station observations are fixed to improve the solution. After one more iteration of estimation and orbit updating, the final orbits of GPS satellites and LEOs are determined. LEO data and processing period selection The seven LEOs in this study are part of four different missions. GRACE is a geodetic mission with the overall objective to obtain long-term data for global (high-resolution) models of the mean and the time-variable components of the Earth's gravity field (Tapley et al. 2004). OSTM/Jason-2 (Lambin et al. 2010) is a follow-on satellite to the joint NASA/CNES oceanography mission Jason-1 (Ménard et al. 2003), and Jason-3 (Vaze et al. 2010) is a follow-on satellite of OSTM/Jason-2. Swarm is a mini-satellite constellation mission to survey the geomagnetic field (Friis-Christensen et al. 2008). 
Our processing period starts shortly after the launch of Jason-3, which is operated in the same orbital plane (\(66^{\circ }\) inclination and 1336 km altitude) as OSTM/Jason-2. By mission definition, GRACE satellites are operating in the same orbital plane (GRACE-B leading GRACE-A in \(89^{\circ }\) inclination and 485 km altitude). The three satellites of the Swarm mission are operating in two different orbital configurations. Swarm-A and Swarm-C are flying at a mean altitude of 480 km in orbital planes with \(87.4^{\circ }\) inclination, while the Swarm-B orbit has a higher inclination (\(87.8^{\circ }\)) and a larger mean altitude of 530 km. According to the operation status mentioned above, the seven LEOs are in four different orbital planes as summarized in Table 1. The colored symbols indicate the orbital planes. It has to be mentioned that Swarm-A and Swarm-C satellites remain side by side with separations about 50 to 200 km. However, due to the identical orbital characteristics of Swarm-A/C, we assume that they are in the same orbital plane. The daily ground tracks of GPS satellites and the seven LEOs are plotted with corresponding colors in Fig. 2. Table 1 Orbit characteristics of the seven LEOs Daily ground tracks of all GPS satellites (upper) and the seven LEOs (lower) on day 160 of year 2016. The LEOs in the same orbital plane are plotted with the same color Since the processing includes seven LEOs from four missions, the availability of the LEOs' data is a major limitation when defining the processing period. After checking the data availability of the seven LEOs, we choose day of year (DOY) 115 to 260 in 2016 as our processing period. In this period, all seven LEOs were in operation. GRACE satellites were at the end of their operating life, but the quaternion data of Jason-3 started to be available from DOY 115 in 2016. To check the LEOs' data quality, a daily POD of each LEO is processed with a 300-second data sampling rate. The IGS final orbit and clock products are introduced as a priori information. We noticed missing data (onboard GPS observation or attitude) for some days. Some additional days were excluded for maneuvering or low data quality caused by spacecraft problems. Please note that we excluded these days completely also in the following integrated processing. In the integrated processing, we also excluded maneuvering GPS satellites based on the information provided in the GPS NANU Messages. Finally, 112 days are selected for the integrated processing and are indicated by green dots in Fig. 3. The LEOs' daily orbits are compared with the official orbit products (Case et al. 2002; Dumont et al. 2009, 2016; Olsen 2019). The RMS of the orbit differences is computed over the epochs and three orbital directions in a daily solution. The average of the daily RMS values over the 112 days are presented in Table 2. We abbreviate the LEOs as G-A/B (GRACE-A/B), J-2/3 (OSTM/Jason-2 and Jason3), and S-A/B/C (Swarm-A/B/C). The larger RMS of Jason-3 compared to that of OSTM/Jason-2 is related to orbit modeling issues, as we applied the model of OSTM/Jason-2 to Jason-3, since some detailed information of Jason-3, for instance, the receiver antenna phase center location, are not yet available. Compared with previous studies and considering the 300-second data sampling rate, a comparable accuracy level of the LEO orbits is achieved. 
Status of data availability in the processing period Table 2 The RMS of the orbit differences between our LEO POD solutions and the official orbit products, averaged over the 112 processed days Ground networks selection There are 319 IGS stations available during the selected 112 processing days. The distribution of these 319 stations is presented in Fig. 4. The operation of each GNSS is mainly based on its own ground segment and tracking stations; for example, as mentioned in Sect. 1, there are only 16 sites with GSS operating for Galileo. Considering this situation, we selected a sparse and homogeneously distributed subset of the 319 available IGS stations to study the sparse-network-based POD. This network contains 33 stations, plotted as blue triangles in Fig. 5. The color of the bins in Figs. 5 and 6 presents the number of stations in sight of a potential GPS satellite position at an altitude of 20,200 km and an inclination of \(57^{\circ }\) (i.e., the Depth of Coverage, DoC); a simple geometric sketch of this visibility count is given below. In general, more than five stations are visible, which is what we expected based on the selection criteria. Available IGS stations (in total 319) for the 112 selected processing days A subset of the available IGS stations including 33 homogeneously and sparsely distributed stations (blue triangles). The number of stations in sight of a potential GPS satellite position (Depth of Coverage) is presented as a colored bin (\(2^{\circ }\times 2^{\circ }\) resolution, 20,200 km altitude) A subset of the available IGS stations including 26 non-homogeneously and sparsely distributed stations (blue triangles). The red triangles are the seven excluded stations. The number of stations in sight of a potential GPS satellite position (Depth of Coverage) is presented as a colored bin (\(2^{\circ }\times 2^{\circ }\) resolution, 20,200 km altitude) Table 3 Dynamic orbit models of GPS satellites and LEOs Despite the large and dense IGS tracking network, in certain circumstances, depending on the constellations and frequencies, one might be confronted with large regions without tracking stations, especially over the oceans and Africa. As seen from Fig. 4, although 319 stations are globally available, there are regions with only a few tracking stations. Moreover, IGS stations can become unavailable for various reasons, and the same might happen to the Galileo Sensor Stations, for instance as a consequence of the withdrawal of the United Kingdom from the European Union (Gutierrez 2018). To investigate how the LEOs could contribute to the GPS POD, we selected a sparser station network (see Fig. 6) by excluding seven (red triangles) of the 33 stations mentioned above. Consequently, gaps in some regions of the Pacific Ocean, the Indian Ocean, and Africa are visible. There are large areas where a fictitious GPS satellite could be tracked by only two to four stations (yellow bins). Although two simultaneous observations can support the estimation of satellite clock corrections and orbit parameters in a dynamic solution, the reduced number of observations still weakens the contribution of these regions. Table 4 Processing configurations and estimated parameters The GPS satellite orbits derived from the two sparse networks mentioned above are our benchmarks, which will be compared with the different integrated solutions. All selected stations are used to define the datum.
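As referenced above, the Depth of Coverage shown in Figs. 5 and 6 can be approximated with simple geometry. The sketch below uses a spherical Earth and a 10-degree elevation cutoff; both the cutoff value and the function names are assumptions made for illustration, since the paper does not state the exact visibility criterion.

```python
import numpy as np

R_E = 6371e3               # mean Earth radius [m], spherical approximation
H_GPS = 20200e3            # GPS altitude [m]
CUTOFF = np.radians(10.0)  # elevation cutoff; an assumption, not from the paper

def ecef(lat, lon, radius):
    """Spherical latitude/longitude (radians) to Cartesian coordinates [m]."""
    return radius * np.array([np.cos(lat) * np.cos(lon),
                              np.cos(lat) * np.sin(lon),
                              np.sin(lat)])

def depth_of_coverage(sat_lat, sat_lon, station_latlon):
    """Number of stations that see a fictitious GPS satellite located above
    (sat_lat, sat_lon) at GPS altitude with an elevation above the cutoff.
    station_latlon is an iterable of (lat, lon) pairs in radians."""
    sat = ecef(sat_lat, sat_lon, R_E + H_GPS)
    n_visible = 0
    for lat, lon in station_latlon:
        sta = ecef(lat, lon, R_E)
        los = sat - sta                       # line-of-sight vector
        up = sta / np.linalg.norm(sta)        # local vertical at the station
        sin_elev = np.dot(los, up) / np.linalg.norm(los)
        if np.arcsin(sin_elev) >= CUTOFF:
            n_visible += 1
    return n_visible
```

Evaluating this count on a 2° by 2° grid of potential satellite positions reproduces the colored bins of Figs. 5 and 6 in principle.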
Since we applied a Helmert transformation when comparing our orbits with the IGS final products, there will be little systematic effect when we analyze the RMS of the orbit differences with respect to these two benchmark results. To investigate the performance of the integrated POD with regional networks, another network, including stations mainly located in China, will be introduced in Sect. 3.4. Processing and analysis strategy All processing is performed with the software PANDA (Liu and Ge 2003). PANDA is capable of GNSS satellite and LEO orbit modeling, and both separate and integrated POD of GNSS satellites and LEOs can be performed. For this study, the implementation of the OSTM/Jason-2, Jason-3, and Swarm-A/B/C data formats (observation, attitude, and precise orbit) was necessary. Table 3 shows the dynamic models used for the orbit integration of the GPS satellites and LEOs. Table 4 introduces the processing configuration and the estimated parameters. To investigate the impact of the number of integrated LEOs and their orbital planes on the determined GPS satellite orbits, we applied a total of 26 different scenarios for the POD processing. All scenarios are summarized in Table 5. The first two are GPS-only POD solutions using the two sparse station networks described in Sect. 2.3. The other 24 scenarios are integrated POD solutions of GPS satellites and LEOs; all of them supplement the sparser 26-station network with different subsets of the seven LEOs. We compared the estimated GPS satellite orbits of all scenarios with the IGS final products to show the orbit quality and the differences between the scenarios. Due to the large number of satellites and scenarios, we computed statistical measures of the orbit comparisons to quantify the result of each scenario. The statistical computation is illustrated in Fig. 7. For each daily orbit comparison, we computed the RMS of the orbit differences in the three orbital directions (along-track, cross-track, and radial) and the 1D-mean RMS. The RMS in the three orbital directions is computed over epochs and satellites. The 1D-mean RMS is computed over epochs, satellites, and the three orbital directions. Based on the 112 daily solutions, we computed the mean and the empirical standard deviation of the time series of the above-mentioned RMS values. These statistical measures are highlighted in green in Fig. 7, and the analysis in Sect. 3.1 is mainly based on them. Flowchart of the statistical computation. The green and yellow outputs are the values used in the analysis of this study Besides the external orbit comparison, internal comparisons are performed in two different ways. The first is the comparison of orbit overlaps. We expand the POD arc length of scenarios 1, 2, and 26 from 24 hours to 30 hours (extending three hours into both the previous and the next day). Consequently, a pair of 6-hour overlapping orbit arcs derived from real data processing is generated between two adjacent days, and the 1D-mean RMS of the orbit differences over the 6-hour overlap is computed. The second comparison considers the satellite position differences at the day-boundary epoch of two adjacent 24-hour orbits at midnight. We extrapolate one more epoch from a 24-hour orbit by orbit integration; the GPS satellite positions at the extrapolated epoch are then compared with the estimated satellite positions in the first epoch of the next 24-hour orbit.
The RMS of the satellite position differences is computed over the satellites and the three orbital directions at the day-boundary epochs. The detailed discussion will be given in Sect. 3.2. Table 5 Statistical results of the GPS satellite orbit differences w.r.t. the IGS final products from 26 scenarios Based on a geolocated comparison of epoch-wise satellite orbit differences (yellow box in Fig. 7) between scenarios 1, 2, and 19, we will discuss the different effects of supplementing a sparse station network with additional stations and LEOs in Sect. 3.3. An additional experiment is designed to show the GPS satellite orbit improvement by adding the seven LEOs to a small and mainly regionally distributed station network. The 1D-mean RMS of the GPS satellite orbit differences compared to IGS final products will be used for the analysis in Sect. 3.4. Although the focus of this study is on improving the GPS satellite orbits derived from limited ground networks, we also presented the quality of the GPS satellite orbits derived from a 62-station globally distributed network as a reference for interested readers. The network distribution and the GPS satellite orbit comparison with scenario 26 are given in the Appendix. Results and discussions Orbit comparison with IGS final products Statistical results of the GPS satellite orbit differences compared to the IGS final products of scenarios 1, 2, 7, 14, 19, and 26. The RMS of orbit differences in the along-track, the cross-track, and the radial directions are computed over epochs and satellites in each day. The 1D-mean RMS is computed over epochs, satellites, and the three orbital directions Improvements of the GPS satellite orbits derived by scenarios 2, 7, 14, 19, and 26 compared to scenario 1. The improvements are derived from 1D-mean RMS. Vertical lines indicate the averaged values Statistical results of the GPS satellite orbit differences compared to the IGS final products of the one-LEO scenarios in time series. The RMS of orbit difference is computed over epochs, satellites, and three orbital directions (along-track, cross-track, and radial) Based on the statistical results shown in Table 5, we will discuss the impact of the number of integrated LEOs and their orbital planes on the GPS satellite orbits. Except for the first two ground-based only solutions, different subsets of the seven LEOs are integrated with the 26-station network. Besides the mean and the standard deviation values of the orbit RMS time series listed in Table 5, the time series of scenarios 1, 2, 7, 14, 19, and 26 are shown in Fig. 8. Correspondingly, the time series of the 1D-mean orbit improvements of scenarios 2, 7, 14, 19, and 26 compared to scenario 1 is shown in Fig. 9. Generally, we observe improved GPS satellite orbits and reduced variations of the time series when increasing the number of ground stations or the integrated LEOs. The GPS satellite orbit accuracy improves most when all the seven LEOs are integrated into the POD. In all scenarios, the orbit accuracy of the three directions is ranked as along-track < cross-track < radial, while the orbit improvements in the three directions are ranked in the reverse order (along-track > cross-track > radial). With only three LEOs integrated, the determined GPS satellite orbits of scenario 19 (\(28\%\) improvement) are slightly better than those of scenario 2 (\(27\%\) improvement) which includes seven well-selected additional ground stations, with a stronger improvement mainly in the along-track direction. 
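The statistical measures of Fig. 7 and the relative improvements discussed in the following can be summarized as in the sketch below. The array shapes, names, and the exact form of the improvement computation are our assumptions for illustration; the improvement is taken as the relative reduction of the 1D-mean RMS with respect to the reference scenario.

```python
import numpy as np

def directional_rms(diff_mm):
    """RMS per orbital direction (along-track, cross-track, radial),
    computed over epochs and satellites.
    diff_mm is assumed to have shape (n_epochs, n_satellites, 3)."""
    return np.sqrt(np.mean(np.asarray(diff_mm) ** 2, axis=(0, 1)))

def one_d_mean_rms(diff_mm):
    """1D-mean RMS over epochs, satellites and the three directions."""
    return float(np.sqrt(np.mean(np.asarray(diff_mm) ** 2)))

def series_statistics(daily_diffs):
    """Mean and empirical standard deviation of the daily 1D-mean RMS values
    over the processed days (the green outputs in Fig. 7)."""
    series = np.array([one_d_mean_rms(d) for d in daily_diffs])
    return series.mean(), series.std(ddof=1)

def improvement_percent(rms_scenario, rms_reference):
    """Relative orbit improvement of a scenario w.r.t. scenario 1, in percent."""
    return 100.0 * (rms_reference - rms_scenario) / rms_reference
```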
There are two peaks in all the plots: one on DOY 196 and the other on DOY 209 and 210. These three days are presented as orange dots in Fig. 3. After checking the residuals, we realized that the large RMS is caused by large errors in the code measurements of one ground station (GODN). Since our data editing strategy is based on the residuals of the phase measurements, station GODN, with large residuals in its code measurements, was not excluded. The GPS orbit improvements for these three days are more significant (about \(50\%\) to \(82\%\) in the different scenarios) than for the other days (about \(10\%\) to \(35\%\)), and with only one LEO included, the improvement is already close to that of the scenario including seven additional stations. With only one LEO integrated (scenarios 3 to 9), the solutions are similar; for example, the 1D-mean RMS values vary only slightly, from 30.6 mm to 31.7 mm. Thus, compared to the 26-station-only solution, the orbit improvements vary from \(14\%\) to \(17\%\). However, the standard deviations of the RMS of these one-LEO scenarios show larger differences (up to 4 mm in 1D-mean). As seen from Fig. 10, there is no systematic difference between these one-LEO scenarios; an impact of the individual LEOs on the derived GPS satellite orbits is not visible. Comparing the values given in Table 5 for the different LEO subsets, several effects become apparent. In scenarios 10 to 15, two LEOs are included in the estimation. If the additional LEO is in the same orbital plane as the first one, the GPS orbit accuracy improves only by about 1 mm compared to the one-LEO scenarios (see scenarios 3 and 4 versus 10; 5 and 6 versus 11; 7 and 8 versus 12). Thus, the GPS orbit improvements compared to scenario 1 remain below \(20\%\) (\(16\%\) to \(19\%\)). However, if the LEOs fly in two different orbital planes, the orbit improvements compared to scenario 1 increase up to \(24\%\), and the 1D-mean RMS values of the GPS orbits decrease to around 28 mm. With an increasing number of integrated LEOs, the impact of the space-based observations and of the LEO orbital planes on the derived GPS satellite orbits becomes more obvious. Figure 11 shows the orbit improvements sorted with respect to the number of LEOs (upper) and the number of orbital planes (lower). The number of integrated LEOs is marked with yellow dots, and the number of different orbital planes is represented by colored bars. As seen from the upper plot, the GPS satellite orbits generally improve when more LEOs are integrated. However, the improvement does not correspond strictly to the increasing number of LEOs. For example, scenario 20 (with four LEOs in two orbital planes) includes one more LEO than scenario 19 (with three LEOs in three orbital planes), but its GPS orbit improvement is smaller (\(25\%\) against \(31\%\)). A similar effect is visible in the comparison between scenario 22 (with four LEOs in four orbital planes) and scenario 23 (with four LEOs in three orbital planes). When we sort the results by the number of LEO orbital planes, a clear trend becomes visible: the lower plot of Fig. 11 shows the GPS orbit improvement increasing with the number of LEO orbital planes. In summary, the LEO orbital geometry is more critical for the improvement of the GPS satellite orbits than the number of space-based observations.
The positive effect of different LEO orbit geometries on the geocenter estimation has also been reported in other studies, for example, the simulation study of combined LEO+GPS processing for geocenter estimation by Kuang et al. (2015) and the real-data study on geocenter variations derived from the combined processing of ground- and space-based GPS observations by Männel and Rothacher (2017). GPS satellite orbit improvements compared to scenario 1. The improvements are sorted with respect to the number of integrated LEOs (upper) and the number of LEO orbital planes (lower) Internal comparison of the orbits In this section, we discuss the overlaps and the day-boundary epochs of the GPS satellite orbits derived from scenarios 1, 2, and 26. Due to the excluded days described in Sect. 2.1 and the overlapping processing strategy introduced in Sect. 2.4, only 65 pairs of overlapping orbits with a 6-hour arc length are available for the comparison. Figure 12 shows the 1D-mean RMS of the differences between the overlapping orbits. As seen from the time series of the three scenarios, the differences of the overlapping orbits are ranked as scenario \(1>2>26\); the mean values and standard deviations of the overlapping orbit differences computed over the 65 days are 57/27 mm, 44/19 mm, and 38/10 mm, respectively. There are 92 day-boundary epochs within the 112 processed days. The GPS satellite position differences at these day-boundary epochs are plotted in Fig. 13. The mean values and standard deviations computed over the 92 epochs are 76/25 mm, 55/12 mm, and 50/8 mm for scenarios 1, 2, and 26, respectively. This plot agrees with the comparison of the overlapping orbits in Fig. 12 and with the external orbit comparison in Fig. 8. The outliers in Figs. 12 and 13 are caused by the observation errors of station GODN mentioned in the previous section. RMS of the differences between the 6-hour overlapping GPS satellite orbits computed over satellites and the three orbital directions. The horizontal lines are the mean values of the time series. RMS of the GPS satellite position differences computed over satellites and the three orbital directions at the day-boundary epoch between two 24-h arcs. The horizontal lines are the mean values of the time series Geolocated visualization of orbit comparison In Sect. 2.3, we explained that the seven additional stations in scenario 2 were selected in regions with few stations in scenario 1. For the analysis regarding station distributions, the GPS satellite orbit improvements of scenarios 2 and 19 compared to scenario 1 are projected onto the surface of the Earth. Based on the epoch-wise orbit differences of each GPS satellite with respect to the IGS final products, we computed the improvements of the GPS satellite orbits of scenarios 2 and 19 compared to scenario 1 with a 900-second sampling rate for all GPS satellites over the 112 days (approximately 344,064 epoch-wise solutions). The results are presented in Figs. 14 and 15. In these two figures, the potential GPS satellite position area is divided into geographical \(2^{\circ }\times 2^{\circ }\) bins (10,260 in total). We computed the average of all epoch-wise solutions located in the same bin, and these geolocated statistical results are presented as the color of the corresponding bins: green means the satellite orbits are closer to the IGS final products (improvement), and red means they are further away (degradation). Additionally, the ground tracks of the GPS satellites are also visible in the plots.
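The geolocated averaging behind Figs. 14 and 15 can be sketched as follows. The routine assumes that each epoch-wise improvement has already been attributed to a sub-satellite latitude and longitude; the grid below is a full global 2-degree grid, of which only the band covered by the GPS ground tracks will actually be populated (the 10,260 bins quoted above presumably correspond to 57 by 180 bins over roughly plus/minus 57 degrees latitude, although this is our reading of the figures rather than an explicit statement in the text). Names and array layouts are illustrative.

```python
import numpy as np

def bin_improvements(lats_deg, lons_deg, improvements_mm, bin_size=2.0):
    """Average epoch-wise orbit improvements per geographical bin.
    lats_deg/lons_deg: sub-satellite latitudes/longitudes of each epoch-wise
    solution [deg]; improvements_mm: the corresponding improvement values."""
    n_lat = int(180 / bin_size)
    n_lon = int(360 / bin_size)
    sums = np.zeros((n_lat, n_lon))
    counts = np.zeros((n_lat, n_lon))
    i = ((np.asarray(lats_deg) + 90.0) / bin_size).astype(int).clip(0, n_lat - 1)
    j = ((np.asarray(lons_deg) + 180.0) / bin_size).astype(int).clip(0, n_lon - 1)
    np.add.at(sums, (i, j), np.asarray(improvements_mm, dtype=float))
    np.add.at(counts, (i, j), 1)
    with np.errstate(invalid="ignore"):
        return sums / counts   # NaN where a bin contains no solution
```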
GPS satellite orbit improvements of scenario 2 w.r.t. scenario 1. The IGS final products are the reference. The color of each bin presents the average value of the epoch-wise solutions located in the bin. The unit of the color bar is [mm] GPS satellite orbit improvements of scenario 19 w.r.t. scenario 1. The IGS final products are the reference. The color of each bin presents the average value of the epoch-wise solutions located in the bin. The unit of the color bar is [mm] Density distributions of all epoch-wise solutions of satellite orbit improvements from scenario 1 to scenarios 2 (red) and 19 (green). Positive means getting closer to the IGS final products In general, with seven well-selected additional stations (scenario 2) or three LEOs (scenario 19), the GPS satellite orbits improve globally (as indicated by the green bins). The improvements are presented more clearly in Fig. 16: the density distributions of all epoch-wise solutions from both comparisons are mainly positive. However, there are still regions without significant improvement (yellow bins), and only a few red bins show a degradation caused by the additional observations. Comparing Figs. 14 and 15, there are more dark-green bins and fewer red bins in the plot of scenario 19. Correspondingly, the density distribution of the epoch-wise solutions of scenario 19 lies to the right of that of scenario 2 in Fig. 16. Therefore, compared to scenario 1, the GPS satellite orbits derived in scenario 19 improve more than those of scenario 2. Particularly in some regions of the Pacific Ocean, the Indian Ocean, and Africa, as seen from the color of the bins, the improvement of scenario 19 is more significant than that of scenario 2. In summary, for a sparsely and non-homogeneously distributed network of ground stations, the derived GPS satellite orbits are improved more by supplementing the network with three LEOs in different orbital planes than with seven well-located additional stations, especially for the orbit arcs above regions lacking stations. Results with a regional station network To show the potential benefits of supplementing a regionally distributed ground network by integrating LEOs, we selected an additional subset of the available IGS stations. Figure 17 shows this network, with five stations in China and another five stations in other regions. The figure shows that about two-thirds of the potential GPS satellite positions (\(2^{\circ }\times 2^{\circ }\) resolution, 20,200 km altitude) can be observed by only two or even fewer stations. GPS-only and seven-LEO-integrated POD were performed with this network. The 1D-mean RMS of the GPS satellite orbit differences with respect to the IGS final products is presented in Fig. 18. Enhanced by the seven LEOs, the 1D-mean RMS decreases significantly, from about 25 cm to 4 cm. Also, the variations of the time series are reduced significantly, from about 4.3 cm to 0.7 cm. The GPS orbit improvement obtained by integrating LEOs with a regional ground network was also demonstrated by Wang et al. (2016) with seven stations within China and three LEOs (GRACE-A/B and FengYun-3C). We also performed a test using only the five stations in China. To obtain an acceptable result, the number of observations had to be increased by extending the arc length to three days and reducing the data sampling interval to 30 seconds. The derived GPS satellite orbits differ from the IGS final products by about 20 cm in 1D-mean RMS, but the LEO orbits degrade significantly.
Further studies should be done to improve the solution in this situation. A subset of the available IGS stations including five stations in China and five stations in other regions. The station visibility from a potential GPS satellite position (Depth of Coverage) is presented as a colored bin (\(2^{\circ }\times 2^{\circ }\) resolution, 20,200 km altitude) GPS satellite orbit RMS from POD with and without LEOs (comparison against IGS final products) It is a potential way to improve the GPS satellite orbits by including LEOs in the POD processing due to the additional observations and geometries offered by the LEOs, especially when there is no additional station available. The benefit of integrating LEOs into the POD is convincing for a sparse or regional network. The GPS satellite orbits are improved more by supplementing a sparse ground network with LEOs than with comparable numbers of additional stations. By integrating three LEOs in three different orbital planes into the POD, the determined GPS satellite orbits (25.1 mm 1D-mean RMS compared to the IGS final products) are more accurate than those of the scenario with seven carefully selected additional ground stations (26.7 mm 1D-mean RMS). The benefits of adding LEOs do not correspond strictly to the number of the integrated LEOs but the diversity of their orbit planes. With the LEOs in different orbital planes, the GPS satellite orbits are improved. Ground stations might bring some undetectable outliers in the observations, especially in sparse networks with less redundancy. In general, the effect of these bad observations can be reduced with more ground stations or LEOs. The mitigation with LEOs introduced is more significant than with more ground stations added. By integrating seven LEOs, the GPS satellite orbits derived from a 10-station and regional ground network are improved impressively with decreased 1D-mean RMS compared to IGS final products from about 25 cm to 4 cm. The impact of LEO orbit modeling quality on derived GPS satellite orbits is not discussed in this study. The impact of different characteristics of the LEO orbits on the integrated POD is a topic for further studies. A Correction to this paper has been published: https://doi.org/10.1007/s00190-021-01557-x Berger C, Biancale R, Barlier F, Ill M (1998) Improvement of the empirical thermospheric model DTM: DTM94: a comparative review of various temporal variations and prospects in space geodesy applications. J Geod 72(3):161–178. https://doi.org/10.1007/s001900050158 Blewitt G (1990) An automatic editing algorithm for GPS data. Geophys Res Lett 17(3):199–202. https://doi.org/10.1029/GL017I003P00199 Case K, Kruizinga G, Wu S (2002) Grace level 1b data product user handbook. Technical report, JPL Choi KK (2014) Status of core products of the international gnss service. AGU Fall Meeting Abstr 2014:G21A–0426 Dumont J, Rosmorduc V, Picot N, Desai S, Bonekamp H, Figa J, Lillibridge J, Scharroo R (2009) Ostm/jason-2 products handbook. Technical representative, CNES and EUMETSAT and JPL and NOAA/NESDIS Dumont J, Rosmorduc V, Carrere L, Picot N, Bronner E, Couhert A, Guillot A, Desai S, Bonekamp H, Figa J et al (2016) Jason-3 products handbook. Technical representative, CNES and EUMETSAT and JPL and NOAA/NESDIS Friis-Christensen E, Lühr H, Knudsen D, Haagmans R (2008) Swarm: an earth observation mission investigating geospace. Adv Space Res 41(1):210–216. 
https://doi.org/10.1016/J.ASR.2006.10.008 Geng JH, Shi C, Zhao QL, Ge MR, Liu JN (2008) Integrated adjustment of LEO and GPS in precision orbit determination. Int Assoc Geod Symp 132:133–137. https://doi.org/10.1007/978-3-540-74584-6_20 Gutierrez P (2018) Brexit and Galileo - Plenty of rumblings, but Where's the Beef? https://insidegnss.com/brexit-and-galileo-plenty-of-rumblings-but-wheres-the-beef/, online article of insidegnss.com Hackel S, Steigenberger P, Hugentobler U, Uhlemann M, Montenbruck O (2015) Galileo orbit determination using combined GNSS and SLR observations. GPS Solut 19(1):15–25. https://doi.org/10.1007/s10291-013-0361-5 Haines B, Bar-Sever Y, Bertiger W, Desai S, Willis P (2004) One-centimeter orbit determination for Jason-1: new GPS-based strategies. Mar Geod. https://doi.org/10.1080/01490410490465300 Jäggi A, Hugentobler U, Bock H, Beutler G (2007) Precise orbit determination for GRACE using undifferenced or doubly differenced GPS data. Adv Space Res 39(10):1612–1619. https://doi.org/10.1016/j.asr.2007.03.012 Johnston G, Neilan R, Craddock A, Dach R, Meertens C, Rizos C (2018) The international GNSS service 2018 update. In: EGU General Assembly Conference Abstracts, vol 20, p 19675 König D (2018) A terrestrial reference frame realised on the observation level using a GPS-LEO satellite constellation. J Geod. https://doi.org/10.1007/s00190-018-1121-7 König R, Reigber C, Zhu S (2005) Dynamic model orbits and Earth system parameters from combined GPS and LEO data. Adv Space Res 36(3):431–437. https://doi.org/10.1016/J.ASR.2005.03.064 Kuang D, Bar-Sever Y, Haines B (2015) Analysis of orbital configurations for geocenter determination with GPS and low-Earth orbiters. J Geod 89(5):471–481. https://doi.org/10.1007/s00190-015-0792-6 Lambin J, Morrow R, Fu LL, Willis JK, Bonekamp H, Lillibridge J, Perbos J, Zaouche G, Vaze P, Bannoura W, Parisot F, Thouvenot E, Coutin-Faye S, Lindstrom E, Mignogno M (2010) The OSTM/Jason-2 mission. Mar Geod 10(1080/01490419):491030 Liu J, Ge M (2003) PANDA software and its preliminary result of positioning and orbit determination. Wuhan Univ J Nat Sci 8(2):603–609. https://doi.org/10.1007/BF02899825 Männel B, Rothacher M (2017) Geocenter variations derived from a combined processing of LEO- and ground-based GPS observations. J Geod 91(8):933–944. https://doi.org/10.1007/s00190-017-0997-y Marshall JA, Antreasian PG, Rosborough GW, Putney BH (1992) Modeling radiation forces acting on satellites for precision orbit determination. Adv Astron Sci 76(pt 1):73–96. https://doi.org/10.2514/3.26408 Ménard Y, Fu LL, Escudier P, Parisot F, Perbos J, Vincent P, Desai S, Haines B, Kunstmann G (2003) The jason-1 mission special issue: Jason-1 calibration/validation. Mar Geod 26(3–4):131–146. https://doi.org/10.1080/714044514 Montenbruck O, Gill E (2000) Satellite orbits: models, methods, and applications, 1st edn. Springer, Berlin. https://doi.org/10.1007/978-3-642-58351-3 Montenbruck O, Hackel S, van den Ijssel J, Arnold D (2018) Reduced dynamic and kinematic precise orbit determination for the Swarm mission from 4 years of GPS tracking. GPS Solut 22(3):79. https://doi.org/10.1007/s10291-018-0746-6 Nischan T (2016) GFZRNX-RINEX GNSS data conversion and manipulation toolbox (Version 1.05). GFZ Data Services. https://doi.org/10.5880/GFZ.1.1.2016.002 Olsen PEH (2019) Swarm l1b product definition. 
National Space Institute Technical University of Denmark, Technical report Otten M, Flohrer C, Springer T, Enderle W (2012) Multi-technique combination at observation level with napeos: combining GPS, GLONASS and leo satellites. In: EGU general assembly conference abstracts, vol 14, p 7925 Petit G, Luzum B (2010) IERS conventions (2010). Technical report, Bureau international des poids et mesures sevres (France) Rebischung P, Griffiths J, Ray J, Schmid R, Collilieux X, Garayt B (2012) IGS08: the IGS realization of ITRF2008. GPS Solut 16(4):483–494. https://doi.org/10.1007/s10291-011-0248-2 Reigber C, Schmidt R, Flechtner F, König R, Meyer U, Neumayer KH, Schwintzer P, Zhu SY (2005) An earth gravity field model complete to degree and order 150 from grace: Eigen-grace02s. J Geodyn 39(1):1–10. https://doi.org/10.1016/j.jog.2004.07.001 Sakic P, Männel B, Nischan T (2018) Operational geodetic products determination and combination for Galileo. In: EGU general assembly conference abstracts, vol 20, p 17515 Schmid R, Dach R, Collilieux X, Jäggi A, Schmitz M, Dilssner F (2016) Absolute IGS antenna phase center model igs08.atx: status and potential improvements. J Geod 90(4):343–364 Springer T, Beutler G, Rothacher M (1999) A new solar radiation pressure model for GPS satellites. GPS Solut 2(3):50–62. https://doi.org/10.1007/PL00012757 Standish EM (1998) JPL planetary and lunar ephemerides, DE405/LE405. Jpl Iom 312F-98-048 Tapley BD, Bettadpur S, Ries JC, Thompson PF, Watkins MM (2004) GRACE measurements of mass variability in the Earth system. Science. https://doi.org/10.1126/science.1099192 Vaze P, Neeck S, Bannoura W, Green J, Wade A, Mignogno M, Zaouche G, Couderc V, Thouvenot E, Parisot F (2010) The Jason-3 mission: completing the transition of ocean altimetry from research to operations. In: Sensors, systems, and next-generation satellites XIV, International Society for Optics and Photonics, vol 7826, p 78260Y Wang L, Zhang Q, Huang G, Yan X, Qin Z (2016) Combining regional monitoring stations with space-based data to determine the MEO satellite orbit. Acta Geod Cartograph Sin 45(S2):101 Wu SC, Yunck TP, Thornton CL (1991) Reduced-dynamic technique for precise orbit determination of low earth satellites. J Guid Control Dyn 14(1):24–30. https://doi.org/10.2514/3.20600 Yang Y (2018) Introduction to BeiDou-3 navigation satellite system. Presented in IGS workshop October 2018, Wuhan, China Zhao Q, Wang C, Guo J, Yang G, Liao M, Ma H, Liu J (2017) Enhanced orbit determination for BeiDou satellites with FengYun-3C onboard GNSS data. GPS Solut 21(3):1179–1190. https://doi.org/10.1007/s10291-017-0604-y Zhao Q, Wang C, Guo J, Wang B, Liu J (2018) Precise orbit and clock determination for BeiDou-3 experimental satellites with yaw attitude analysis. GPS Solut 22(1):4. https://doi.org/10.1007/s10291-017-0673-y Zhu S, Reigber C, König R (2004) Integrated adjustment of CHAMP, GRACE, and GPS data. J Geod 78(1–2):103–108. https://doi.org/10.1007/s00190-004-0379-0 Zoulida M, Pollet A, Coulot D, Perosanz F, Loyer S, Biancale R, Rebischung P (2016) Multi-technique combination of space geodesy observations: impact of the Jason-2 satellite on the GPS satellite orbits estimation. Adv Space Res 58(7):1376–1389. https://doi.org/10.1016/j.asr.2016.06.019 Author Wen Huang was financially supported by Chinese Scholarship Council. The merged data for overlapping comparison is processed by GFZRNX software (Nischan 2016). 
The data of GRACE, Jason and Swarm satellites are provided by ISDC (ftp://isdcftp.gfz-potsdam.de), AVISO (https://www.aviso.altimetry.fr) and ESA (ftp://swarm-diss.eo.esa.int), respectively. GPS ground observations and related products are provided by IGS (http://www.igs.org/products). The GPS NANU message is offered by the U.S. Coast Guard Navigation Center. Author contributions: Wen Huang designed the research and wrote the paper; Benjamin Männel contributed to the experiment design and the results analysis; Pierre Sakic contributed to the station network selection and the results analysis; Maorong Ge contributed to the software programming and the theoretical considerations; All the authors joined the research discussion and data analysis, and gave comments for the paper writing. Deutsches GeoForschungsZentrum GFZ, Telegrafenberg, 14473, Potsdam, Germany Wen Huang, Benjamin Männel, Pierre Sakic, Maorong Ge & Harald Schuh Institute of Geodesy and Geoinformation Science, Technische Universität Berlin, Strasse des 17. Juni 135, Berlin, 10623, Germany Wen Huang, Maorong Ge & Harald Schuh Wen Huang Benjamin Männel Pierre Sakic Maorong Ge Harald Schuh Correspondence to Wen Huang. See Figs. 19 and 20. A dense subset of the IGS ground stations with 62 globally distributed stations Statistical results of the GPS satellite orbit differences w.r.t. the IGS final products of 62-station scenario and 26-station+7-LEO scenario. The RMS of orbit differences in the along-track, the cross-track, and the radial directions are computed over epochs and satellites in each day. The 1D-mean RMS is computed over epochs, satellites, and the three orbital directions Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Huang, W., Männel, B., Sakic, P. et al. Integrated processing of ground- and space-based GPS observations: improving GPS satellite orbits observed with sparse ground networks. J Geod 94, 96 (2020). https://doi.org/10.1007/s00190-020-01424-1 Integrated processing Sparse ground network
What is the density profile within the Sun's photosphere? Which one of these is wrong?

The Sun's photosphere contains the Sun's surface as defined by the opacity = 2/3 point. I'd like to see the profile of mass density from the bottom to the top of the photosphere. I did a quick search and got confused. The image below is found on the Wikipedia Photosphere page. If I trace the dotted line labeled "Density" to the photosphere layer and read the density axis at the top, I read something like $8 \text{ to } 3 \times 10^{-7}\ \mathrm{g/cm^{3}}$, which you could call $1 \times 10^{-6}\ \mathrm{g/cm^{3}}$. However, the Sun section of the Wikipedia page where this image is shown says: The Sun's photosphere has a temperature between 4,500 and 6,000 K (4,230 and 5,730 °C) (with an effective temperature of 5,777 K (5,504 °C)) and a density of about 1×10−6 kg/m3; increasing with depth into the sun. and links to the solar-center.stanford.edu page The Sun's Vital Statistics for the $1 \times 10^{-6}\ \mathrm{kg/m^{3}}$ value. Converting the units, this is only $1 \times 10^{-9}\ \mathrm{g/cm^{3}}$. Question: Is it possible to clear up this disparity, and to see a plot of the density versus depth from the bottom to the top of the Sun's photosphere, which would likely contain both positive and negative heights relative to the Sun's surface? Source: File:Sun Atmosphere Temperature and Density SkyLab.jpg Original source: SP-402 A New Sun: The Solar Results From Skylab the-sun stellar-atmospheres photosphere

Comment: companion question: At what depth below the Sun's surface does the density reach that of water?

I usually don't answer my own questions, but sometimes when the question itself is called into question I make an exception. The density of the photosphere at $\tau_{5000}=1$ is predicted to be $3 \times 10^{-7}\ \mathrm{g/cm^{3}}$ in the Holweger-Müller model atmosphere [7]. As pointed out in the comments, there is a spread in values here. The $1 \times 10^{-6}\ \mathrm{g/cm^{3}}$ density value (plot) is more consistent with $\tau = 1$, the "bottom" of the photosphere, while the density in the quote is more consistent with the cooler "top" of the photosphere (circa 4300 K). From Chapter 2: The Photosphere of Timo Nieminen's thesis Solar Line Asymmetries: Modelling the Effect of Granulation on the Solar Spectrum. Figure 2-3: The Holweger-Müller Model Atmosphere. [7] Holweger, H. and Müller, E. A. "The Photospheric Barium Spectrum: Solar Abundance and Collision Broadening of Ba II Lines by Hydrogen", Solar Physics 39, pp. 19-30 (1974). Extra points have been cubic-spline interpolated by J. E. Ross. The optical properties (such as the optical depth and the opacity) of a model atmosphere are, obviously, very important, and will be considered later. See table C-4 for complete details of the Holweger-Müller model atmosphere including all depth points used. [8] The height scale is not arbitrary. The base of the photosphere (height = 0 km) is chosen to be at a standard optical depth of one (i.e., $\tau_{5000\,\text{Å}} = 1$).

Comment: The HM atmosphere perfectly well shows that the "photosphere" covers a range of temperatures and densities. The density at the temperature minimum is around $10^{-9}$ g/cc and increases as you go inwards. The quote and picture are consistent. – ProfRob

Comment: The quote is open to misinterpretation, but it isn't incorrect if the photosphere is defined by that range of temperature.
Comment: @RobJeffries I'd meant to delete that; I went off and made an edit, how does it look now?

The density is what you read from the graph, correctly. Don't worry about what it says in that quote, it's all just a matter of what is meant by the "photosphere", a term that is rather vaguely defined and used to mean different things in different places. You can see the problem in the temperatures used in that quote: they correspond to what the graph considers to be entirely above the photosphere. The quote seems to think of the photosphere as the region from the tau ~ 2/3 point to the minimum in the temperature, whereas the graph seems to think of the photosphere as something quite noticeably hotter. Other places regard the photosphere as a shell of zero width, right where tau ~ 2/3. It's all just the different ways the word is used, there's nothing to worry about. The graph matches up density with height and temperature, so you can just use that, and note that even that is a kind of average situation; the reality is much more complicated. As for positive and negative heights, why would you care what point is getting called x = 0? It's completely arbitrary where the zero height is set; every different source could likely use a different meaning for "the top of the photosphere." – Ken G

Comment: So you are sure it's 1E-06 g/cm^3 and not 1E-09 g/cm^3? Can you provide an authoritative, independent source to support your conclusion?

Comment: Your "why would you care what point is getting called x=0?" is inappropriate. A plot of density versus depth needs a reference point. The gradient is so steep in this region that it would be absurd to measure from the center of the Sun. Instead, the "surface" or "x=0" is a much better point of reference in this particular case.

Comment: Have a look at this answer, where all data is referenced to the Sun's surface.

Comment: The location of the center of the Sun is of no interest here. As for support for my statement, the graph itself shows where those two densities appear. So it is simply different language about what constitutes the "photosphere." To wit, the quoted statement is talking about the density at the temperature minimum, so it is clear they regard the photosphere as extending up to the bottom of the "chromosphere", which starts at the temperature minimum. Also, there is no clear meaning to "the Sun's surface", because the Sun is a gas. So it's all pure semantics; the graph says all you need. – Ken G

Comment: I'm primarily looking for an answer to the question "What is the density profile within the Sun's photosphere?" The "Which one of these is wrong?" is less important; it's okay if they are both wrong. The accepted answer to my question will likely show or link to a mass density profile of the Sun's photosphere as defined by some model. "You don't want to know the answer to your question" or "Astrophysicists can't agree where the photosphere begins and ends" probably won't be accepted.
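For reference, the unit conversion at the heart of the apparent disparity is a standard one:

$$1\times10^{-6}\ \mathrm{kg\,m^{-3}} \;=\; \frac{1\times10^{-6}\times10^{3}\ \mathrm{g}}{10^{6}\ \mathrm{cm^{3}}} \;=\; 1\times10^{-9}\ \mathrm{g\,cm^{-3}},$$

so the value quoted from the Stanford page is indeed three orders of magnitude below the roughly $3\times10^{-7}$ to $10^{-6}\ \mathrm{g/cm^{3}}$ read from the graph near $\tau_{5000} = 1$. As both answers point out, the two numbers refer to different heights within the photosphere (the cool temperature-minimum region near the top versus the $\tau \approx 1$ base) rather than to a genuine contradiction.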
Density of a sunspot compared to the surrounding photosphere? Precision in the measurement of the distance to the Sun Does the Sun's atmosphere have a scale height? Mass of sun's core What is the air pressure in the heliosphere (Sun's atmosphere)? What is the main source of opacity in sunspots? What forces expelled these huge clouds, then blocked further progress, yet allowed it to maintain its threads? Why is sun's photosphere a million times less dense than air at the surface of the Earth? absorption line from the chromosphere How else can a star form, other than gravitational collapse?